OOM problem

1. Basic knowledge

Hard disk: Also called disk, used to store data. You store your songs, pictures, and videos on your hard drive.

Memory: Because reading from the hard disk is slow, efficiency would suffer if the CPU read all data directly from the disk while running a program. So the CPU first reads the data a program needs from the hard disk into memory, and then does its calculations by exchanging data with memory. Memory is volatile storage (its contents disappear when power is lost). It is the storage inside the computer (on the motherboard) that holds the intermediate data and results of CPU operations, and it acts as the bridge between the program on disk and the CPU: data read from the hard disk, or a program to be run, passes through memory on its way to the CPU.

Virtual memory is a memory-management technique. It lets a program believe it has contiguous free memory, when in fact its memory is usually split into physical fragments, some of which may be temporarily stored on external disk storage and swapped back into physical memory when needed. On Windows this backing store is called virtual memory; on Linux/Unix it is called swap space.

Does iOS support swap space? No, and it is not just iOS: most mobile systems don't. Flash memory is the main storage of mobile devices, and its read/write speed is far lower than that of the hard disks used by computers, so even if phones used swap-space technology it would not improve performance. Hence there is no swap space.

2. iOS memory knowledge

Memory (RAM), like the CPU, is one of the scarcest resources in the system and is easily contended for, and memory usage is directly related to performance. iOS has no swap space to fall back on, which makes memory especially precious.

What is OOM? OOM is short for out-of-memory, which literally means exceeding the memory limit. It is divided into FOOM (Foreground Out Of Memory: the application crashes while running in the foreground, which loses active users and is very undesirable for the business) and BOOM (Background Out Of Memory: the application crashes while running in the background). It is a non-mainstream crash caused by the iOS Jetsam mechanism and cannot be captured by signal-based crash-monitoring schemes.

What is the Jetsam mechanism? Jetsam can be understood as the mechanism the system uses to keep memory from being over-used. Jetsam runs in a separate process, and each process has a memory threshold; if a process exceeds its threshold, Jetsam kills it immediately.

Why was Jetsam designed? Because device memory is limited, memory is a precious resource that system processes and all running apps compete for. Since iOS does not support swap space, once a low-memory event is triggered, Jetsam frees as much memory as possible by evicting apps. This is how, when iOS runs out of memory, an App gets killed by the system and crashes.

OOM is triggered in two cases: the overall memory usage of the device is too high, so the system kills lower-priority apps according to its priority policy; or the current App reaches the “high water mark”, i.e. it exceeds the system's memory limit for a single App, and the system kills it.

Reading the source code (xnu/bsd/kern/kern_memorystatus.c), you will find that there are indeed two kill mechanisms, as follows.

Highwater processing -> our App must not exceed the per-process memory limit

  1. Loop through the priority buckets looking for candidate processes
  2. Check whether p_memstat_memlimit is exceeded
  3. Filter by DiagnoseActive and FREEZE
  4. Kill the process; exit if the kill succeeds, otherwise continue the loop

Memorystatus_act_aggressive processing -> overall memory usage is too high; kill according to priority

  1. Determine jld_bucket_count according to the policy
  2. Start from JETSAM_PRIORITY_ELEVATED_INACTIVE
  3. Use old_bucket_count and memorystatus_jld_eval_period_msecs to decide whether to kill
  4. When memorystatus_avail_pages_below_pressure, kill from the lowest priority up to the highest

Several cases of memory usage

  • Our App's memory consumption is low and other apps manage memory well, so even if we switch to another App, our own App stays “alive” and keeps its user state. The experience is good.
  • Our App's memory consumption is low, but other apps consume too much memory (either because of poor memory management or because they are inherently resource-hungry, like games); then all apps except the foreground one are killed by the system and their memory is reclaimed for the active process.
  • Our App consumes a lot of memory, so when memory runs short the system kills it preferentially, even if the memory requested by other apps is not large. When the user sends the App to the background and opens it again later, the App has to be reloaded and restarted.
  • Our App's memory consumption is very large, so it gets killed by the system even while running in the foreground, which the user sees as a crash.

When memory is insufficient, the system makes more space available according to certain policies. A common practice is to move some low-priority data to disk, an operation called page out. When that data is accessed later, the system moves it back into memory, an operation called page in.

A memory page is the smallest unit of memory management and is allocated by the system. A single page may hold multiple objects, and a large object may span multiple pages. A page is typically 16 KB, and there are three types of page:

  • Clean Memory

    Clean memory includes three types: memory that can be paged out, memory-mapped files, and the frameworks used by the App (each framework has a __DATA_CONST segment, which is usually clean but becomes dirty with runtime swizzling).

    All newly allocated pages are clean at first (except for objects allocated on the heap); they become dirty once our App writes data to them. A file read from the hard disk into memory is read-only and is therefore also a clean page.

  • Dirty Memory

    Dirty memory consists of four categories: memory that the App has written to, all heap-allocated objects, image decoding buffers, and frameworks (each framework has __DATA and __DATA_DIRTY segments, which are all dirty).

    Using singletons or global initialization methods can help reduce the dirty memory generated while using a framework (because a singleton, once created, is never destroyed and stays in memory, so the system does not treat it as dirty memory).

  • Compressed Memory

    Because of flash-capacity and read/write limitations, iOS has no swap space. Instead, the memory compressor was introduced in iOS 7. When memory is tight, the compressor compresses memory objects that have not been used recently, freeing up more pages; when such an object is needed again, the compressor decompresses it for reuse. This saves memory while keeping response times good.

    For example, suppose a framework used by the App stores data in an NSDictionary property that occupies 3 pages of memory. If the dictionary has not been accessed recently, the memory compressor can compress it down to 1 page, and restore it to 3 pages when it is used again.

App running memory = number of pages × page size. Because compressed memory counts as dirty memory, memory footprint = dirtySize + compressedSize.
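As a rough illustration of this formula, here is a minimal sketch (assuming the TASK_VM_INFO task-info flavor, whose fields may vary by OS version) that reads the footprint and its compressed portion for the current task:

#import <Foundation/Foundation.h>
#import <mach/mach.h>

// Minimal sketch: read phys_footprint (≈ dirty + compressed) and the compressed portion.
static void LogFootprint(void) {
    task_vm_info_data_t info;
    mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)&info, &count);
    if (kr != KERN_SUCCESS) {
        return;
    }
    NSLog(@"footprint: %.2f MB, compressed: %.2f MB",
          info.phys_footprint / 1024.0 / 1024.0,
          info.compressed / 1024.0 / 1024.0);
}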

The memory limit varies by device. The limit for an App is higher, while the limit for an extension is lower; exceeding the limit triggers EXC_RESOURCE_EXCEPTION.

Next, I’ll talk about how to get the memory limit and how to monitor apps for being forcibly killed for taking up too much memory.

3. Obtain memory information

3.1 Calculate the memory limit using the JetsamEvent log

When an App is killed by Jetsam, the phone generates a system log. View path: Settings - Privacy - Analytics & Improvements - Analytics Data. There you can see logs named like JetsamEvent-2020-03-14-161828.ips, i.e. starting with JetsamEvent. These JetsamEvent logs are left behind by the iOS kernel when it kills apps (idle, frontmost, suspended) that use more memory than the system's memory limit.

The logs contain the App's memory information. The pageSize field appears near the top of the log. Search for per-process-limit; in that node, rpages * pageSize gives the OOM threshold.

The largestProcess field in the log is the App name; the reason field gives the reason for the kill; the states field is the state of the App when it crashed (idle, suspended, frontmost, ...).

To test the accuracy of the data, I completely quit all apps on two devices (iPhone 6s Plus / 13.3.1 and iPhone 11 Pro / 13.3.1) and ran only one demo App, which requests memory in a loop, to probe the memory threshold. The ViewController code is as follows:

- (void)viewDidLoad {
    [super viewDidLoad];
    NSMutableArray *array = [NSMutableArray array];
    for (NSInteger index = 0; index < 10000000; index++) {
        UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
        UIImage *image = [UIImage imageNamed:@"AppIcon"];
        imageView.image = image;
        [array addObject:imageView];
    }
}

iPhone 6s Plus / 13.3.1

{"bug_type":"298","timestamp":"2020-03-19 17:23:45.94 +0800"," OS_version ":"iPhone OS 13.3.1 (17D50)","incident_id":"DA8AF66D-24E8-458C-8734-981866942168"} { "crashReporterKey" : "Fc9b659ce486df1ed1b8062d5c7c977a7eb8c851", "the kernel", "Darwin kernel Version 19.3.0: Thu Jan 9 21:10:44 PST 2020; Root :xnu-6153.82.3~1\/RELEASE_ARM64_S8000", "product" : "iPhone8,2", "incident" : "Da8af66d-24e8-458c-8734-98186694 34-168 "," Date ": "2020-03-19 17:23:45.93 +0800"," Build ": "IPhone OS 13.3.1 (17D50)", "timeDelta" : 797, "memoryStatus" : {"compressorSize" : 797, "compressions" : 7458651, "decompressions" : 5190200, "zoneMapCap" : 744407040, "largestZone" : "APFS_4K_OBJS", "largestZoneSize" : 41402368, "pageSize" : 16384, "uncompressed" : 104065, "zoneMapSize" : 141606912, "memoryPages" : { "active" : 26214, "throttled" : 0, "fileBacked" : 14903, "wired" : 20019, "anonymous" : 37140, "purgeable" : 142, "inactive" : 23669, "free" : 2967, "speculative" : 2160 } }, "largestProcess" : "Test", "genCounter" : 0, "processes" : [ { "uuid" : "39c5738b-b321-3865-a731-68064c4f7a6f", "states" : [ "daemon", "idle" ], "lifetimeMax" : 188, "age" : 948223699030, "purgeable" : 0, "fds" : 25, "coalition" : 422, "rpages" : 177, "pid" : 282, "idleDelta" : 824711280, "name" : ". Com. Apple Safari. SafeBrowsing. Se ", "cpuTime" : 10.275422000000001}, {" uuid ": / /... "83dbf121-7c0c-3ab5-9b66-77ee926e1561", "states" : [ "frontmost" ], "killDelta" : 2592, "genCount" : 0, "age" : 1531004794, "purgeable" : 0, "fds" : 50, "coalition" : 1047, "rpages" : 92806, "reason" : "per-process-limit", "pid" : 2384, "cpuTime" : 59.46437399999999999, "name" : "Test", "lifetimeMax" : 92806}, //...] }Copy the code

For iPhone 6s Plus / 13.3.1 the OOM threshold is: (16384 × 92806) / (1024 × 1024) = 1450.09375 MB

iPhone 11 Pro / 13.3.1

{"bug_type":"298","timestamp":"2020-03-19 17:30:28.39 +0800"," OS_version ":"iPhone OS 13.3.1 (17D50)","incident_id":"7F111601-BC7A-4BD7-A468-CE3370053057"} { "crashReporterKey" : "Bc2445adc164c399b330f812a48248e029e26276", "the kernel", "Darwin kernel Version 19.3.0: Thu Jan 9 21:11:10 PST 2020; Root :xnu-6153.82.3~1\/RELEASE_ARM64_T8030", "product" : "iPhone12,3", "incident" : "7F111601-bc7a-4bd7-a468-CE3370053057 "," Date ": "2020-03-19 17:30:28.39 +0800", "build" : "IPhone OS 13.3.1 (17D50)", "timeDelta" : 189, "memoryStatus" : {"compressorSize" : 66443, "compressions" : 25498129, "decompressions" : 15532621, "zoneMapCap" : 1395015680, "largestZone" : "APFS_4K_OBJS", "largestZoneSize" : 41222144, "pageSize" : 16384, "uncompressed" : 127027, "zoneMapSize" : 169639936, "memoryPages" : { "active" : 58652, "throttled" : 0, "fileBacked" : 20291, "wired" : 45838, "anonymous" : 96445, "purgeable" : 4, "inactive" : 54368, "free" : 5461, "supporting" : 3716}}, "processes" : [{"uuid" : 3}}, "processes" : "2dd5eb1e-fd31-36c2-99d9-bcbff44efbb7", "states" : [ "daemon", "idle" ], "lifetimeMax" : 171, "age" : 5151034269954, "purgeable" : 0, "fds" : 50, "coalition" : 66, "rpages" : 164, "pid" : 11276, "idleDelta" : 3801132318, "name" : "the WCD", "cpuTime" : 3.430787}, {" uuid ": / /... "63158edc-915f-3a2b-975c-0e0ac4ed44c0", "states" : [ "frontmost" ], "killDelta" : 4345, "genCount" : 0, "age" : 654480778, "purgeable" : 0, "fds" : 50, "coalition" : 1718, "rpages" : 134278, "reason" : "per-process-limit", "pid" : 14206, "cpuTime" : 23.955463999999999, "name" : "lifetimeMax" : 134278}, //... }Copy the code

For iPhone 11 Pro / 13.3.1 the OOM threshold is: (16384 × 134278) / (1024 × 1024) = 2098.09375 MB

How does iOS discover Jetsam?

macOS/iOS is a BSD-derived system with a Mach kernel, and the interfaces exposed at the top are generally BSD-layer wrappers around Mach. Mach is a microkernel architecture in which the real virtual-memory management is done, while BSD provides the upper-level interfaces to memory management. Jetsam events are also generated by BSD. The bsd_init function is the entry point; it mainly initializes subsystems such as virtual memory management.

// 1. Initialize the kernel memory allocator, i.e. the BSD memory zone, which is built on top of the Mach kernel
kmeminit();

// 2. Initialise background freezing, an iOS-only feature that monitors memory and freezes (sleeps) processes
#if CONFIG_FREEZE
#ifndef CONFIG_MEMORYSTATUS
    #error "CONFIG_FREEZE defined without matching CONFIG_MEMORYSTATUS"
#endif
    /* Initialise background freezing */
    bsd_init_kprintf("calling memorystatus_freeze_init\n");
    memorystatus_freeze_init();
#endif

// 3. iOS only: Jetsam, i.e. the resident monitor thread for low-memory events
#if CONFIG_MEMORYSTATUS
    /* Initialize kernel memory status notifications */
    bsd_init_kprintf("calling memorystatus_init\n");
    memorystatus_init();
#endif /* CONFIG_MEMORYSTATUS */

Its main job is to start the two highest-priority threads that monitor the memory situation of the entire system.

When CONFIG_FREEZE is enabled, the kernel freezes processes instead of killing them. Freezing is done by starting a memorystatus_freeze_thread in the kernel, which calls memorystatus_freeze_top_process when it receives the signal.

iOS starts the highest-priority thread, vm_pressure_monitor, to monitor the system's memory pressure, and it maintains all App processes in a stack. iOS also maintains a memory snapshot table that records the memory-page consumption of each process. The logic of Jetsam, also called memorystatus, can be found in kern_memorystatus.h and kern_memorystatus.c in the XNU project's source code.

Before the iOS system forcibly kills an App due to high memory usage, the JetsamEvent log is generated within 6 seconds.

As mentioned above, iOS has no swap space, hence the introduction of memorystatus (also known as Jetsam). Its goal is to free as much memory as possible on iOS for the current App. The mechanism works by priority: background apps are killed first, and if memory is still insufficient, the current foreground app is killed. On macOS, memorystatus only force-kills processes marked as idle-exit.

The memorystatus mechanism starts a memorystatus_jetsam_thread, which is responsible for force-killing apps and writing the log, but it does not send out any message, so the memory-pressure monitoring thread cannot learn that an App was force-killed.

When the monitoring thread detects that memory is under pressure, it sends out a notification, and apps holding memory execute the didReceiveMemoryWarning delegate method. At this point we still have a chance to free some memory, which may prevent the App from being killed by the system.
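For example, a minimal sketch of that hook might look like the following (imageCache is a hypothetical NSCache owned by the view controller; which resources you drop depends entirely on the app):

// Minimal sketch: free reclaimable resources when the system reports memory pressure.
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    [self.imageCache removeAllObjects];   // drop caches that can be rebuilt later
}

// The same warning can also be handled app-wide in the AppDelegate.
- (void)applicationDidReceiveMemoryWarning:(UIApplication *)application {
    [[NSURLCache sharedURLCache] removeAllCachedResponses];
}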

Looking at the problem from the source code's perspective

The iOS kernel keeps an array that maintains priorities. Each entry in the array is a structure containing a linked list of processes. The structure is as follows:

#define MEMSTAT_BUCKET_COUNT (JETSAM_PRIORITY_MAX + 1)

typedef struct memstat_bucket {
    TAILQ_HEAD(, proc) list;
    int count;
} memstat_bucket_t;

memstat_bucket_t memstat_bucket[MEMSTAT_BUCKET_COUNT];

You can see the priority values in kern_memorystatus.h:

#define JETSAM_PRIORITY_IDLE_HEAD                -2
/* The value -1 is an alias to JETSAM_PRIORITY_DEFAULT */
#define JETSAM_PRIORITY_IDLE                      0
#define JETSAM_PRIORITY_IDLE_DEFERRED		  1 /* Keeping this around till all xnu_quick_tests can be moved away from it.*/
#define JETSAM_PRIORITY_AGING_BAND1		  JETSAM_PRIORITY_IDLE_DEFERRED
#define JETSAM_PRIORITY_BACKGROUND_OPPORTUNISTIC  2
#define JETSAM_PRIORITY_AGING_BAND2		  JETSAM_PRIORITY_BACKGROUND_OPPORTUNISTIC
#define JETSAM_PRIORITY_BACKGROUND                3
#define JETSAM_PRIORITY_ELEVATED_INACTIVE	  JETSAM_PRIORITY_BACKGROUND
#define JETSAM_PRIORITY_MAIL                      4
#define JETSAM_PRIORITY_PHONE                     5
#define JETSAM_PRIORITY_UI_SUPPORT                8
#define JETSAM_PRIORITY_FOREGROUND_SUPPORT        9
#define JETSAM_PRIORITY_FOREGROUND               10
#define JETSAM_PRIORITY_AUDIO_AND_ACCESSORY      12
#define JETSAM_PRIORITY_CONDUCTOR                13
#define JETSAM_PRIORITY_HOME                     16
#define JETSAM_PRIORITY_EXECUTIVE                17
#define JETSAM_PRIORITY_IMPORTANT                18
#define JETSAM_PRIORITY_CRITICAL                 19

#define JETSAM_PRIORITY_MAX                      21

It can be clearly seen that the background App priority JETSAM_PRIORITY_BACKGROUND is 3 and the foreground App priority JETSAM_PRIORITY_FOREGROUND is 10.

The priority rules are: kernel threads > operating system > Apps, and foreground Apps have a higher priority than background Apps. When threads have the same priority, the one that uses the most CPU has its priority lowered.

The possible causes of an OOM can be seen in kern_memorystatus.c:

/* For logging clarity */
static const char *memorystatus_kill_cause_name[] = {
	""								,		/* kMemorystatusInvalid							*/
	"jettisoned"					,		/* kMemorystatusKilled							*/
	"highwater"						,		/* kMemorystatusKilledHiwat						*/
	"vnode-limit"					,		/* kMemorystatusKilledVnodes					*/
	"vm-pageshortage"				,		/* kMemorystatusKilledVMPageShortage			*/
	"proc-thrashing"				,		/* kMemorystatusKilledProcThrashing				*/
	"fc-thrashing"					,		/* kMemorystatusKilledFCThrashing				*/
	"per-process-limit"				,		/* kMemorystatusKilledPerProcessLimit			*/
	"disk-space-shortage"			,		/* kMemorystatusKilledDiskSpaceShortage			*/
	"idle-exit"						,		/* kMemorystatusKilledIdleExit					*/
	"zone-map-exhaustion"			,		/* kMemorystatusKilledZoneMapExhaustion			*/
	"vm-compressor-thrashing"		,		/* kMemorystatusKilledVMCompressorThrashing		*/
	"vm-compressor-space-shortage"	,		/* kMemorystatusKilledVMCompressorSpaceShortage	*/
};

Now look at the key code that initializes the Jetsam threads in the memorystatus_init function:

__private_extern__ void
memorystatus_init(void)
{
	// ...
  /* Initialize the jetsam_threads state array */
	jetsam_threads = kalloc(sizeof(struct jetsam_thread_state) * max_jetsam_threads);
  
	/* Initialize all the jetsam threads */
	for (i = 0; i < max_jetsam_threads; i++) {

		result = kernel_thread_start_priority(memorystatus_thread, NULL, 95 /* MAXPRI_KERNEL */, &jetsam_threads[i].thread);
		if (result == KERN_SUCCESS) {
			jetsam_threads[i].inited = FALSE;
			jetsam_threads[i].index = i;
			thread_deallocate(jetsam_threads[i].thread);
		} else {
			panic("Could not create memorystatus_thread %d", i); }}}Copy the code
/*
 * High-level priority assignments
 *
 * 127          Reserved (real-time)
 *              (32 levels)
 *  96          Reserved (real-time)
 *  95          Kernel mode only
 *              (16 levels)
 *  80          Kernel mode only
 *  79          System high priority
 *              (16 levels)
 *  64          System high priority
 *  63          Elevated priorities
 *              (12 levels)
 *  52          Elevated priorities
 *  51          Elevated priorities (incl. BSD +nice)
 *              (20 levels)
 *  32          Elevated priorities (incl. BSD +nice)
 *  31          Default (default base for threads)
 *  30          Lowered priorities (incl. BSD -nice)
 *              (20 levels)
 *  11          Lowered priorities (incl. BSD -nice)
 *  10          Lowered priorities (aged pri's)
 *              (11 levels)
 *   0          Lowered priorities (aged pri's / idle)
 */

As you can see, a user-mode application cannot have threads with higher priority than the operating system and the kernel. Moreover, thread priorities also differ among user-mode applications; for example, foreground applications have higher priority than background ones, and the highest-priority app on iOS is SpringBoard. In addition, thread priorities are not fixed: Mach dynamically adjusts them according to thread utilization and overall system load. A thread's priority is lowered if it consumes too much CPU and raised if it has been starved for too long, but a program can never exceed the priority range of the band it belongs to.

Depending on kernel boot arguments and device performance, max_jetsam_threads (1 in general, 3 in special cases) Jetsam threads are started. These threads have priority 95, i.e. MAXPRI_KERNEL (note that 95 is a thread priority; XNU thread priorities range from 0 to 127, whereas the macros defined above are process/Jetsam band priorities, ranging from -2 to 19).

Next, let's examine the memorystatus_thread function, which contains the main logic the thread runs after it starts:

static void
memorystatus_thread(void *param __unused, wait_result_t wr __unused)
{
  // ...
  while (memorystatus_action_needed()) {
		boolean_t killed;
		int32_t priority;
		uint32_t cause;
		uint64_t jetsam_reason_code = JETSAM_REASON_INVALID;
		os_reason_t jetsam_reason = OS_REASON_NULL;

		cause = kill_under_pressure_cause;
		switch (cause) {
			case kMemorystatusKilledFCThrashing:
				jetsam_reason_code = JETSAM_REASON_MEMORY_FCTHRASHING;
				break;
			case kMemorystatusKilledVMCompressorThrashing:
				jetsam_reason_code = JETSAM_REASON_MEMORY_VMCOMPRESSOR_THRASHING;
				break;
			case kMemorystatusKilledVMCompressorSpaceShortage:
				jetsam_reason_code = JETSAM_REASON_MEMORY_VMCOMPRESSOR_SPACE_SHORTAGE;
				break;
			case kMemorystatusKilledZoneMapExhaustion:
				jetsam_reason_code = JETSAM_REASON_ZONE_MAP_EXHAUSTION;
				break;
			case kMemorystatusKilledVMPageShortage:
				/* falls through */
			default:
				jetsam_reason_code = JETSAM_REASON_MEMORY_VMPAGESHORTAGE;
				cause = kMemorystatusKilledVMPageShortage;
				break;
		}

		/* Highwater */
		boolean_t is_critical = TRUE;
		if (memorystatus_act_on_hiwat_processes(&errors, &hwm_kill, &post_snapshot, &is_critical)) {
			if (is_critical == FALSE) {
				/* * For now, don't kill any other processes. */
				break;
			} else {
				goto done;
			}
		}

		jetsam_reason = os_reason_create(OS_REASON_JETSAM, jetsam_reason_code);
		if (jetsam_reason == OS_REASON_NULL) {
			printf("memorystatus_thread: failed to allocate jetsam reason\n");
		}

		if (memorystatus_act_aggressive(cause, jetsam_reason, &jld_idle_kills, &corpse_list_purged, &post_snapshot)) {
			goto done;
		}

		/* * memorystatus_kill_top_process() drops a reference, * so take another one so we can continue to use this exit reason * even after it returns */
		os_reason_ref(jetsam_reason);

		/* LRU */
		killed = memorystatus_kill_top_process(TRUE, sort_flag, cause, jetsam_reason, &priority, &errors);
		sort_flag = FALSE;

		if (killed) {
			if (memorystatus_post_snapshot(priority, cause) == TRUE) {

        			post_snapshot = TRUE;
			}

			/* Jetsam Loop Detection */
			if (memorystatus_jld_enabled == TRUE) {
				if ((priority == JETSAM_PRIORITY_IDLE) || (priority == system_procs_aging_band) || (priority == applications_aging_band)) {
					jld_idle_kills++;
				} else {
					/*
					 * We've reached into bands beyond idle deferred.
					 * We make no attempt to monitor them
					 */
				}
			}

			if ((priority >= JETSAM_PRIORITY_UI_SUPPORT) && (total_corpses_count() > 0) && (corpse_list_purged == FALSE)) {
				/*
				 * If we have jetsammed a process in or above JETSAM_PRIORITY_UI_SUPPORT
				 * then we attempt to relieve pressure by purging corpse memory.
				 */
				task_purge_all_corpses();
				corpse_list_purged = TRUE;
			}

			goto done;
		}

		if (memorystatus_avail_pages_below_critical()) {
			/* Still under pressure and unable to kill a process - purge corpse memory */
			if (total_corpses_count() > 0) {
				task_purge_all_corpses();
				corpse_list_purged = TRUE;
			}

			if (memorystatus_avail_pages_below_critical()) {
				/* Still under pressure and unable to kill a process - panic */
				panic("memorystatus_jetsam_thread: no victim! available pages:%llu\n", (uint64_t)memorystatus_available_pages);
			}
		}
			
done:	

}

You can see that it runs a loop, with memorystatus_action_needed() as the condition, to keep freeing memory.

static boolean_t
memorystatus_action_needed(void)
{
#if CONFIG_EMBEDDED
	return (is_reason_thrashing(kill_under_pressure_cause) ||
			is_reason_zone_map_exhaustion(kill_under_pressure_cause) ||
	       memorystatus_available_pages <= memorystatus_available_pages_pressure);
#else /* CONFIG_EMBEDDED */
	return (is_reason_thrashing(kill_under_pressure_cause) ||
			is_reason_zone_map_exhaustion(kill_under_pressure_cause));
#endif /* CONFIG_EMBEDDED */
}

It decides whether current memory resources are tight based on the memory pressure reported by vm_pageout. There are several cases: is_reason_thrashing (thrashing), is_reason_zone_map_exhaustion (the Mach zone map is exhausted), and the number of available pages falling below the memorystatus_available_pages_pressure threshold.

memorystatus_thread first handles the high-water type of OOM when memory is low, i.e. an OOM occurs once a process exceeds its high water mark. memorystatus_act_on_hiwat_processes() uses memorystatus_kill_hiwat_proc() to find the lowest-priority process in the memstat_bucket priority buckets; if that process's memory usage is below its threshold (footprint_in_bytes <= memlimit_in_bytes), it keeps searching through the list until it finds a process whose memory usage exceeds its threshold, and kills it.

It is generally difficult for a single App to hit the high water mark. If no process gets killed there, execution falls through to memorystatus_act_aggressive, which is where most OOM kills take place.

static boolean_t
memorystatus_act_aggressive(uint32_t cause, os_reason_t jetsam_reason, int *jld_idle_kills, boolean_t *corpse_list_purged, boolean_t *post_snapshot)
{
	// ...
  if ( (jld_bucket_count == 0) || 
		     (jld_now_msecs > (jld_timestamp_msecs + memorystatus_jld_eval_period_msecs))) {

			/* * Refresh evaluation parameters */
			jld_timestamp_msecs	 = jld_now_msecs;
			jld_idle_kill_candidates = jld_bucket_count;
			*jld_idle_kills		 = 0;
			jld_eval_aggressive_count = 0;
			jld_priority_band_max	= JETSAM_PRIORITY_UI_SUPPORT;
		}
  // ...
}

As you can see from the code above, whether to kill is decided by a time interval: a kill happens only when jld_now_msecs > (jld_timestamp_msecs + memorystatus_jld_eval_period_msecs), i.e. only after memorystatus_jld_eval_period_msecs has elapsed.

/* Jetsam Loop Detection */
if (max_mem <= (512 * 1024 * 1024)) {
	/* 512 MB devices */
memorystatus_jld_eval_period_msecs = 8000;	/* 8000 msecs == 8 second window */
} else {
	/* 1GB and larger devices */
memorystatus_jld_eval_period_msecs = 6000;	/* 6000 msecs == 6 second window */
}

The minimum value of memorystatus_jld_eval_period_msecs is 6 seconds, so we have at least 6 seconds in which we can do something.
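One thing we can do in that window is react to memory-pressure events with a GCD memory-pressure dispatch source. The sketch below is only an illustration under the assumption that freeing caches (or dumping allocation records) is the desired response; what exactly to do is up to the app.

#import <Foundation/Foundation.h>

// Minimal sketch: react to memory-pressure events inside the ~6 second window.
static dispatch_source_t memoryPressureSource;

static void StartMemoryPressureMonitor(void) {
    memoryPressureSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0,
                                                  DISPATCH_MEMORYPRESSURE_WARN | DISPATCH_MEMORYPRESSURE_CRITICAL,
                                                  dispatch_get_main_queue());
    dispatch_source_set_event_handler(memoryPressureSource, ^{
        dispatch_source_memorypressure_flags_t level = dispatch_source_get_data(memoryPressureSource);
        if (level & DISPATCH_MEMORYPRESSURE_CRITICAL) {
            // Last chance: drop every cache we can, and/or dump allocation records for later analysis.
        } else if (level & DISPATCH_MEMORYPRESSURE_WARN) {
            // Trim caches proactively.
        }
    });
    dispatch_resume(memoryPressureSource);
}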

3.2 Results compiled by developers

A Stack Overflow thread compiles the OOM thresholds of various devices:

| device | crash amount (MB) | total amount (MB) | percentage of total |
| --- | --- | --- | --- |
| iPad1 | 127 | 256 | 49% |
| iPad2 | 275 | 512 | 53% |
| iPad3 | 645 | 1024 | 62% |
| iPad4 (iOS 8.1) | 585 | 1024 | 57% |
| iPad Mini 1st Generation | 297 | 512 | 58% |
| iPad Mini Retina (iOS 7.1) | 696 | 1024 | 68% |
| iPad Air | 697 | 1024 | 68% |
| iPad Air 2 (iOS 10.2.1) | 1383 | 2048 | 68% |
| iPad Pro 9.7" (iOS 10.0.2 (14A456)) | 1395 | 1971 | 71% |
| iPad Pro 10.5" (iOS 11 Beta 4) | 3057 | 4000 | 76% |
| iPad Pro 12.9" (2015) (iOS 11.2.1) | 3058 | 3999 | 76% |
| iPad 10.2 (iOS 13.2.3) | 1844 | 2998 | 62% |
| iPod Touch 4th Gen (iOS 6.1.1) | 130 | 256 | 51% |
| iPod Touch 5th Gen | 286 | 512 | 56% |
| iPhone4 | 325 | 512 | 63% |
| iPhone4s | 286 | 512 | 56% |
| iPhone5 | 645 | 1024 | 62% |
| iPhone5s | 646 | 1024 | 63% |
| iPhone6 (iOS 8.x) | 645 | 1024 | 62% |
| iPhone6 Plus (iOS 8.x) | 645 | 1024 | 62% |
| iPhone6s (iOS 9.2) | 1396 | 2048 | 68% |
| iPhone6s Plus (iOS 10.2.1) | 1396 | 2048 | 68% |
| iPhoneSE (iOS 9.3) | 1395 | 2048 | 68% |
| iPhone7 (iOS 10.2) | 1395 | 2048 | 68% |
| iPhone7 Plus (iOS 10.2.1) | 2040 | 3072 | 66% |
| iPhone8 (iOS 12.1) | 1364 | 1990 | 70% |
| iPhoneX (iOS 11.2.1) | 1392 | 2785 | 50% |
| iPhoneXS (iOS 12.1) | 2040 | 3754 | 54% |
| iPhoneXS Max (iOS 12.1) | 2039 | 3735 | 55% |
| iPhoneXR (iOS 12.1) | 1792 | 2813 | 63% |
| iPhone11 (iOS 13.1.3) | 2068 | 3844 | 54% |
| iPhone11 Pro Max (iOS 13.2.3) | 2067 | 3740 | 55% |

3.3 Trigger the current App’s High water Mark

We can write a timer that continuously allocates memory and prints the current memory footprint via phys_footprint. Logically, constantly allocating memory will trigger the Jetsam mechanism and the App will be force-killed, so the last printed footprint is also the memory limit of the current device.

self.timer = [NSTimer scheduledTimerWithTimeInterval:0.01
                                              target:self
                                            selector:@selector(allocateMemory)
                                            userInfo:nil
                                             repeats:YES];

- (void)allocateMemory {
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    UIImage *image = [UIImage imageNamed:@"AppIcon"];
    imageView.image = image;
    [array addObject:imageView];

    memoryLimitSizeMB = [self usedSizeOfMemory];
    if (memoryWarningSizeMB && memoryLimitSizeMB) {
        NSLog(@"----- memory warning:%dMB, memory limit:%dMB", memoryWarningSizeMB, memoryLimitSizeMB);
    }
}

- (int)usedSizeOfMemory {
    task_vm_info_data_t taskInfo;
    mach_msg_type_number_t infoCount = TASK_VM_INFO_COUNT;
    kern_return_t kernReturn = task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)&taskInfo, &infoCount);
    if (kernReturn != KERN_SUCCESS) {
        return 0;
    }
    return (int)(taskInfo.phys_footprint / 1024.0 / 1024.0);
}

3.4 Obtaining the available memory in iOS 13

In iOS 13, <os/proc.h> provides size_t os_proc_available_memory(void), which reports the amount of memory currently available to the app.

Return Value

The number of bytes that the app may allocate before it hits its memory limit. If the calling process isn’t an app, or if the process has already exceeded its memory limit, this function returns 0.

Discussion

Call this function to determine the amount of memory available to your app. The returned value corresponds to the current memory limit minus the memory footprint of your app at the time of the function call. Your app’s memory footprint consists of the data that you allocated in RAM, and that must stay in RAM (or the equivalent) at all times. Memory limits can change during the app life cycle and don’t necessarily correspond to the amount of physical memory available on the device.

Use the returned value as advisory information only and don’t cache it. The precise value changes when your app does any work that affects memory, which can happen frequently.

Although this function lets you determine the amount of memory your app may safely consume, don’t use it to maximize your app’s memory usage. Significant memory use, even when under the current memory limit, affects system performance. For example, when your app consumes all of its available memory, the system may need to terminate other apps and system processes to accommodate your app’s requests. Instead, always consume the smallest amount of memory you need to be responsive to the user’s needs.

If you need more detailed information about the available memory resources, you can call task_info. However, be aware that task_info is an expensive call, whereas this function is much more efficient.

if (@available(iOS 13.0, *)) {
    return os_proc_available_memory() / 1024.0 / 1024.0;
}

The APIs for an App's memory information can be found in the Mach layer. The mach_task_basic_info structure stores the memory usage of a Mach task, where resident_size is the physical memory used by the application and virtual_size is the virtual memory size.

#define MACH_TASK_BASIC_INFO     20         /* always 64-bit basic info */
struct mach_task_basic_info {
    mach_vm_size_t  virtual_size;       /* virtual memory size (bytes) */
    mach_vm_size_t  resident_size;      /* resident memory size (bytes) */
    mach_vm_size_t  resident_size_max;  /* maximum resident memory size (bytes) */
    time_value_t    user_time;          /* total user run time for
                                            terminated threads */
    time_value_t    system_time;        /* total system run time for
                                            terminated threads */
    policy_t        policy;             /* default policy for new threads */
    integer_t       suspend_count;      /* suspend count for task */
};

So the get code is

task_vm_info_data_t vmInfo;
mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
kern_return_t kr = task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)&vmInfo, &count);
if (kr != KERN_SUCCESS) {
    return;
}
CGFloat memoryUsed = (CGFloat)(vmInfo.phys_footprint / 1024.0 / 1024.0);

Some readers may wonder: shouldn't resident_size be used to get the memory usage? Initial tests showed a large gap between resident_size and Xcode's measurement, while phys_footprint is close to the value Xcode reports; this can also be verified from the WebKit source code.
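A quick way to see the difference is to print both values from the same TASK_VM_INFO call side by side and compare them with Xcode's memory gauge (a small sketch, nothing more):

// Minimal sketch: print resident_size and phys_footprint together for comparison.
task_vm_info_data_t vmInfo;
mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
if (task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)&vmInfo, &count) == KERN_SUCCESS) {
    NSLog(@"resident_size: %.2f MB, phys_footprint: %.2f MB",
          vmInfo.resident_size / 1024.0 / 1024.0,
          vmInfo.phys_footprint / 1024.0 / 1024.0);
}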

In iOS 13, the available memory can be obtained via os_proc_available_memory, and the memory already used by the App via phys_footprint. The sum of the two is the memory limit of the current device; exceeding it triggers the Jetsam mechanism.

- (CGFloat)limitSizeOfMemory {
    if (@available(iOS 13.0, *)) {
        task_vm_info_data_t taskInfo;
        mach_msg_type_number_t infoCount = TASK_VM_INFO_COUNT;
        kern_return_t kernReturn = task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)&taskInfo, &infoCount);
        if (kernReturn != KERN_SUCCESS) {
            return 0;
        }
        return (CGFloat)((taskInfo.phys_footprint + os_proc_available_memory()) / (1024.0 * 1024.0));
    }
    return 0;
}

Currently available memory: 1435.936752 MB; currently used memory: 14.5 MB; threshold: 1435.936752 MB + 14.5 MB = 1450.436752 MB. This is essentially the same as the threshold obtained in section 3.1: “the iPhone 6s Plus / 13.3.1 OOM threshold is (16384 × 92806) / (1024 × 1024) = 1450.09375 MB”.

3.5 Obtaining the Memory Limit Using XNU

In XNU there are functions and macros dedicated to getting the memory limit. The memorystatus_priority_entry structure holds the priority and memory limit of every process.

typedef struct memorystatus_priority_entry {
  pid_t pid;
  int32_t priority;
  uint64_t user_data;
  int32_t limit;
  uint32_t state;
} memorystatus_priority_entry_t;

priority is the priority of the process and limit is its memory limit. However, this method requires root privileges; since I don't have a jailbroken device, I haven't tried it.

The relevant declaration can be found in kern_memorystatus.h: int memorystatus_control(uint32_t command, int32_t pid, uint32_t flags, void *buffer, size_t buffersize);

/* Commands */
#define MEMORYSTATUS_CMD_GET_PRIORITY_LIST            1
#define MEMORYSTATUS_CMD_SET_PRIORITY_PROPERTIES      2
#define MEMORYSTATUS_CMD_GET_JETSAM_SNAPSHOT          3
#define MEMORYSTATUS_CMD_GET_PRESSURE_STATUS          4
#define MEMORYSTATUS_CMD_SET_JETSAM_HIGH_WATER_MARK   5    /* Set active memory limit = inactive memory limit, both non-fatal	*/
#define MEMORYSTATUS_CMD_SET_JETSAM_TASK_LIMIT	      6    /* Set active memory limit = inactive memory limit, both fatal	*/
#define MEMORYSTATUS_CMD_SET_MEMLIMIT_PROPERTIES      7    /* Set memory limits plus attributes independently */
#define MEMORYSTATUS_CMD_GET_MEMLIMIT_PROPERTIES      8    /* Get memory limits plus attributes */
#define MEMORYSTATUS_CMD_PRIVILEGED_LISTENER_ENABLE   9    /* Set the task's status as a privileged listener w.r.t memory notifications */
#define MEMORYSTATUS_CMD_PRIVILEGED_LISTENER_DISABLE  10   /* Reset the task's status as a privileged listener w.r.t memory notifications */
#define MEMORYSTATUS_CMD_AGGRESSIVE_JETSAM_LENIENT_MODE_ENABLE  11   /* Enable the 'lenient' mode for aggressive jetsam. See comments in kern_memorystatus.c near the top. */
#define MEMORYSTATUS_CMD_AGGRESSIVE_JETSAM_LENIENT_MODE_DISABLE 12   /* Disable the 'lenient' mode for aggressive jetsam. */
#define MEMORYSTATUS_CMD_GET_MEMLIMIT_EXCESS          13   /* Compute how much a process's phys_footprint exceeds inactive memory limit */
#define MEMORYSTATUS_CMD_ELEVATED_INACTIVEJETSAMPRIORITY_ENABLE 	14 /* Set the inactive jetsam band for a process to JETSAM_PRIORITY_ELEVATED_INACTIVE */
#define MEMORYSTATUS_CMD_ELEVATED_INACTIVEJETSAMPRIORITY_DISABLE 	15 /* Reset the inactive jetsam band for a process to the default band (0)*/
#define MEMORYSTATUS_CMD_SET_PROCESS_IS_MANAGED       16   /* (Re-)Set state on a process that marks it as (un-)managed by a system entity e.g. assertiond */
#define MEMORYSTATUS_CMD_GET_PROCESS_IS_MANAGED       17   /* Return the 'managed' status of a process */
#define MEMORYSTATUS_CMD_SET_PROCESS_IS_FREEZABLE     18   /* Is the process eligible for freezing? Apps and extensions can pass in FALSE to opt out of freezing, i.e.,

Pseudo code

struct memorystatus_priority_entry memStatus[NUM_ENTRIES];
size_t count = sizeof(struct memorystatus_priority_entry) * NUM_ENTRIES;
int rc = memorystatus_control(MEMORYSTATUS_CMD_GET_PRIORITY_LIST, 0, 0, memStatus, count);
if (rc < 0) {
  NSLog(@"memorystatus_control failed");
  return;
}

int entry = 0;
for (; rc > 0; rc -= sizeof(struct memorystatus_priority_entry)) {
  printf("PID: %5d\tPriority:%2d\tUser Data: %llx\tLimit:%2d\tState:%s\n",
         memStatus[entry].pid,
         memStatus[entry].priority,
         memStatus[entry].user_data,
         memStatus[entry].limit,
         state_to_text(memStatus[entry].state));
  entry++;
}

The for loop prints the pid, priority, user data, limit, and state of every process (i.e. every App). Find the process with priority 10 in the output: that is our foreground App. Why 10? Because #define JETSAM_PRIORITY_FOREGROUND 10, and our goal is to get the foreground App's memory limit.

4. How to determine OOM

Before an OOM causes a crash, will the app receive a low-memory warning?

Let's run two comparison experiments:

// Experiment 1
NSMutableArray *array = [NSMutableArray array];
for (NSInteger index = 0; index < 10000000; index++) {
    NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Info" ofType:@"plist"];
    NSData *data = [NSData dataWithContentsOfFile:filePath];
    [array addObject:data];
}

// Experiment 2
// ViewController.m
- (void)viewDidLoad {
    [super viewDidLoad];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSMutableArray *array = [NSMutableArray array];
        for (NSInteger index = 0; index < 10000000; index++) {
            NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Info" ofType:@"plist"];
            NSData *data = [NSData dataWithContentsOfFile:filePath];
            [array addObject:data];
        }
    });
}

- (void)didReceiveMemoryWarning {
    NSLog(@"2");
}

// AppDelegate.m
- (void)applicationDidReceiveMemoryWarning:(UIApplication *)application {
    NSLog(@"1");
}

Phenomenon:

  1. In viewDidLoad, i.e. on the main thread, consuming too much memory produces no low-memory warning: the app just crashes, because memory grows so fast that the main thread is too busy to deliver the warning.
  2. In the multi-threaded case, the App does get a low-memory warning because its memory is growing too fast: applicationDidReceiveMemoryWarning in the AppDelegate executes first, followed by the current view controller's didReceiveMemoryWarning.

Conclusion:

A low-memory warning does not necessarily end in a crash, because the system takes about 6 seconds to decide whether memory is really low; and when an OOM does occur, you may not receive a low-memory warning at all.

5. Collect memory information

To locate the problem accurately, we need to dump all objects and their memory information. When memory approaches the system limit, we collect and record the required information and upload it to a server for analysis and fixing.

You also need to know in which function each object was created so that you can restore the “crime scene.”

Looking at the source code (libmalloc/malloc.c), the memory-allocation functions malloc, calloc, and so on use nano_zone by default. nano_zone handles allocations smaller than 256 B; allocations larger than 256 B go through scalable_zone.

We mainly monitor large allocations. malloc goes through malloc_zone_malloc, and calloc goes through malloc_zone_calloc.

All functions that allocate memory through scalable_zone call malloc_logger, because the system needs a single place to count and manage memory allocations. This design also conforms to the open-closed principle.

void *
malloc(size_t size)
{
	void *retval;
	retval = malloc_zone_malloc(default_zone, size);
	if (retval == NULL) {
		errno = ENOMEM;
	}
	return retval;
}

void *
calloc(size_t num_items, size_t size)
{
	void *retval;
	retval = malloc_zone_calloc(default_zone, num_items, size);
	if (retval == NULL) {
		errno = ENOMEM;
	}
	return retval;
}

First let’s see what the default_zone is

typedef struct {
	malloc_zone_t malloc_zone;
	uint8_t pad[PAGE_MAX_SIZE - sizeof(malloc_zone_t)];
} virtual_default_zone_t;

static virtual_default_zone_t virtual_default_zone
__attribute__((section("__DATA,__v_zone")))
__attribute__((aligned(PAGE_MAX_SIZE))) = {
	NULL, NULL,
	default_zone_size,
	default_zone_malloc,
	default_zone_calloc,
	default_zone_valloc,
	default_zone_free,
	default_zone_realloc,
	default_zone_destroy,
	DEFAULT_MALLOC_ZONE_STRING,
	default_zone_batch_malloc,
	default_zone_batch_free,
	&default_zone_introspect,
	10,
	default_zone_memalign,
	default_zone_free_definite_size,
	default_zone_pressure_relief,
	default_zone_malloc_claimed_address,
};

static malloc_zone_t *default_zone = &virtual_default_zone.malloc_zone;

static void *
default_zone_malloc(malloc_zone_t *zone, size_t size)
{
	zone = runtime_default_zone();
	return zone->malloc(zone, size);
}


MALLOC_ALWAYS_INLINE
static inline malloc_zone_t *
runtime_default_zone(void) {
	return (lite_zone) ? lite_zone : inline_malloc_default_zone();
}

You can see that default_zone is initialized in this way

static inline malloc_zone_t *
inline_malloc_default_zone(void)
{
	_malloc_initialize_once();
	// malloc_report(ASL_LEVEL_INFO, "In inline_malloc_default_zone with %d %d\n", malloc_num_zones, malloc_has_debug_zone);
	return malloc_zones[0];
}

The call chain is _malloc_initialize -> create_scalable_zone -> create_scalable_szone, and that is how we get our default_zone.

malloc_zone_t *
create_scalable_zone(size_t initial_size, unsigned debug_flags) {
	return (malloc_zone_t *) create_scalable_szone(initial_size, debug_flags);
}
Copy the code
void *malloc_zone_malloc(malloc_zone_t *zone, size_t size)
{
  MALLOC_TRACE(TRACE_malloc | DBG_FUNC_START, (uintptr_t)zone, size, 0, 0);
  void *ptr;
  if (malloc_check_start && (malloc_check_counter++ >= malloc_check_start)) {
    internal_check();
  }
  if (size > MALLOC_ABSOLUTE_MAX_SIZE) {
    return NULL;
  }
  ptr = zone->malloc(zone, size);
  // After the zone allocates the memory, malloc_logger records the allocation
  if (malloc_logger) {
    malloc_logger(MALLOC_LOG_TYPE_ALLOCATE | MALLOC_LOG_TYPE_HAS_ZONE, (uintptr_t)zone, (uintptr_t)size, 0, (uintptr_t)ptr, 0);
  }
  MALLOC_TRACE(TRACE_malloc | DBG_FUNC_END, (uintptr_t)zone, size, (uintptr_t)ptr, 0);
  return ptr;
}

The allocation is implemented by zone->malloc which, according to the previous analysis, is the corresponding malloc implementation in the szone_t structure.

After the szone is created, the following initialization operations are performed.

// Initialize the security token.
szone->cookie = (uintptr_t)malloc_entropy[0];

szone->basic_zone.version = 12;
szone->basic_zone.size = (void *)szone_size;
szone->basic_zone.malloc = (void *)szone_malloc;
szone->basic_zone.calloc = (void *)szone_calloc;
szone->basic_zone.valloc = (void *)szone_valloc;
szone->basic_zone.free = (void *)szone_free;
szone->basic_zone.realloc = (void *)szone_realloc;
szone->basic_zone.destroy = (void *)szone_destroy;
szone->basic_zone.batch_malloc = (void *)szone_batch_malloc;
szone->basic_zone.batch_free = (void *)szone_batch_free;
szone->basic_zone.introspect = (struct malloc_introspection_t *)&szone_introspect;
szone->basic_zone.memalign = (void *)szone_memalign;
szone->basic_zone.free_definite_size = (void *)szone_free_definite_size;
szone->basic_zone.pressure_relief = (void *)szone_pressure_relief;
szone->basic_zone.claimed_address = (void *)szone_claimed_address;

Other functions that allocate memory through scalable_zone behave similarly, so no matter how outer functions wrap them, large memory allocations eventually reach the malloc_logger function. We can therefore hook this function (for example with fishhook), record the memory allocations, and, combined with a suitable reporting strategy, upload the data to a server for analysis and fixing.

// For logging VM allocation and deallocation, arg1 here
// is the mach_port_name_t of the target task in which the
// alloc or dealloc is occurring. For example, for mmap()
// that would be mach_task_self(), but for a cross-task-capable
// call such as mach_vm_map(), it is the target task.

typedef void (malloc_logger_t)(uint32_t type, uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t result, uint32_t num_hot_frames_to_skip);

extern malloc_logger_t *__syscall_logger;

When the malloc_logger and __syscall_logger function pointers are not NULL, memory allocation/deallocation operations such as malloc/free and vm_allocate/vm_deallocate notify the upper layer through these pointers; this is how the memory debugging tool Malloc Stack works. With these two function pointers it is easy to record the allocation information (size and allocation stack) of the currently live objects. The allocation stack can be captured with the backtrace function, but the captured addresses are virtual memory addresses and cannot be looked up in the dSYM symbol table directly, so we also record the load offset (slide) of every image; then symbol table address = stack address - slide.
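A minimal sketch of that idea follows: assign a custom logger to the exported malloc_logger pointer and capture the allocation stack with backtrace. The threshold, storage, and thread-safety details are omitted and would need real engineering; this is only an assumption-level sketch, not how OOMDetector is actually implemented.

#import <malloc/malloc.h>
#import <execinfo.h>

// Same signature as the typedef quoted above; malloc_logger is exported by libmalloc.
typedef void (malloc_logger_t)(uint32_t type, uintptr_t arg1, uintptr_t arg2,
                               uintptr_t arg3, uintptr_t result, uint32_t num_hot_frames_to_skip);
extern malloc_logger_t *malloc_logger;

static void my_malloc_logger(uint32_t type, uintptr_t arg1, uintptr_t arg2,
                             uintptr_t arg3, uintptr_t result, uint32_t num_hot_frames_to_skip) {
    // For allocation records like the one quoted earlier, arg1 is the zone, arg2 the size, result the pointer.
    size_t size = arg2;
    if (size < 1 * 1024 * 1024) {            // hypothetical threshold: only track allocations >= 1 MB
        return;
    }
    void *frames[64];
    int count = backtrace(frames, 64);       // virtual addresses; subtract the image slide before symbolication
    // Store (result, size, frames[0..count]) in a pre-allocated record; avoid allocating here,
    // otherwise the logger would recurse into itself.
    (void)type; (void)arg3; (void)num_hot_frames_to_skip; (void)count;
}

static void install_allocation_logger(void) {
    malloc_logger = my_malloc_logger;        // libmalloc then calls this for every zone allocation
}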

Small tips:

ASLR (Address Space Layout Randomization) randomizes the layout of a process's address space. It is a security technique against memory-corruption exploits: key data areas are placed at random positions in the process's address space so that an attacker cannot reliably jump to a particular location in memory. Modern operating systems generally have this mechanism.

Function address add: the address where the function actually resides at run time;

Function virtual address vm_add: the address recorded in the binary;

ASLR slide: the random offset applied to the virtual address when the Mach-O image is loaded into process memory; it differs for every load. So vm_add + slide = add, and *(base + offset) = imp.
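As a small illustration, here is a sketch that records the slide of every loaded image with the dyld APIs, so that symbol-table address = captured stack address - slide:

#import <mach-o/dyld.h>
#import <stdio.h>

// Minimal sketch: print the ASLR slide of every loaded image.
static void LogImageSlides(void) {
    for (uint32_t i = 0; i < _dyld_image_count(); i++) {
        printf("%s slide: 0x%lx\n", _dyld_get_image_name(i), (long)_dyld_get_image_vmaddr_slide(i));
    }
}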

Tencent has open-sourced its own OOM-location solution, OOMDetector; since the wheel already exists, we might as well use it. So the idea of memory monitoring is: find the memory limit the system gives the App, dump the memory state when usage gets close to that limit, assemble the basic information into well-formed report data, and send it to the server according to some reporting strategy. The server consumes the data, analyzes it, and generates reports, and client engineers locate the problems based on those reports. The data of different projects is sent to the project's owner and developers by email, SMS, WeChat Work, and so on (in serious cases the developer is called directly and a supervisor follows up on the result of every step). Once the problem is found, either a new version is released or a hotfix is applied.

6. What can we do about memory during development

  1. Image scaling

    WWDC 2018 Session 416 – iOS Memory Deep Dive points out that when scaling an image, using UIImage directly consumes memory to decode the file and generates an intermediate bitmap that uses a lot of memory, whereas ImageIO has neither of these drawbacks and only uses the memory of the final image size.

    Two comparison experiments: display an image in the App.

    // Method 1: 19.6 MB
    UIImage *imageResult = [self scaleImage:[UIImage imageNamed:@"test"]
                                    newSize:CGSizeMake(self.view.frame.size.width, self.view.frame.size.height)];
    self.imageView.image = imageResult;

    // Method 2
    NSData *data = UIImagePNGRepresentation([UIImage imageNamed:@"test"]);
    UIImage *imageResult = [self scaledImageWithData:data
                                            withSize:CGSizeMake(self.view.frame.size.width, self.view.frame.size.height)
                                               scale:3
                                         orientation:UIImageOrientationUp];
    self.imageView.image = imageResult;

    - (UIImage *)scaleImage:(UIImage *)image newSize:(CGSize)newSize {
        UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return newImage;
    }

    - (UIImage *)scaledImageWithData:(NSData *)data withSize:(CGSize)size scale:(CGFloat)scale orientation:(UIImageOrientation)orientation {
        CGFloat maxPixelSize = MAX(size.width, size.height);
        CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, nil);
        NSDictionary *options = @{(__bridge id)kCGImageSourceCreateThumbnailFromImageAlways : (__bridge id)kCFBooleanTrue,
                                  (__bridge id)kCGImageSourceThumbnailMaxPixelSize : [NSNumber numberWithFloat:maxPixelSize]};
        CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
        UIImage *resultImage = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
        CGImageRelease(imageRef);
        CFRelease(sourceRef);
        return resultImage;
    }

    You can see that scaling with ImageIO has a lower memory footprint than scaling with UIImage directly.

  2. Use Autoreleasepool wisely

We know that objects in an autorelease pool are released at the end of a run-loop iteration. Under ARC, if we keep allocating memory, for instance inside loops, we need to add @autoreleasepool manually to avoid a sudden memory spike and an OOM.

Contrast experiment

// Experiment 1
NSMutableArray *array = [NSMutableArray array];
for (NSInteger index = 0; index < 10000000; index++) {
    NSString *indexStrng = [NSString stringWithFormat:@"%zd", index];
    NSString *resultString = [NSString stringWithFormat:@"%zd-%@", index, indexStrng];
    [array addObject:resultString];
}

// Experiment 2
NSMutableArray *array = [NSMutableArray array];
for (NSInteger index = 0; index < 10000000; index++) {
    @autoreleasepool {
        NSString *indexStrng = [NSString stringWithFormat:@"%zd", index];
        NSString *resultString = [NSString stringWithFormat:@"%zd-%@", index, indexStrng];
        [array addObject:resultString];
    }
}

Experiment 1 consumes 739.6 MB of memory, while Experiment 2 consumes 587 MB.

  3. UIGraphicsBeginImageContext and UIGraphicsEndImageContext must appear in pairs, otherwise the context leaks. Xcode's Analyze can also scan for this kind of problem.

  4. Use WKWebView whenever you open a web page or execute JS. UIWebView occupies a large amount of memory, which increases the probability of an OOM in the App. WKWebView is a multi-process component whose network loading and UI rendering happen in other processes, so its memory overhead is lower than UIWebView's.

  5. When writing an SDK or an App, use NSCache instead of NSMutableDictionary for cache-like scenarios. NSCache is the class the system provides for caching; the memory it allocates is purgeable memory, which the system can free automatically. Combining NSCache with NSPurgeableData lets the system reclaim memory as needed or remove objects when memory is cleaned up (see the sketch after this list).

    Other development habits are hard to list one by one; good habits and code awareness need to be built up through practice.
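A minimal sketch of the NSCache approach mentioned in item 5 (the class name and the limits below are hypothetical):

#import <UIKit/UIKit.h>

// Minimal sketch: an image cache backed by NSCache instead of NSMutableDictionary.
// NSCache evicts entries under memory pressure, so it will not keep growing until OOM.
@interface HypotheticalImageCache : NSObject
@property (nonatomic, strong) NSCache<NSString *, UIImage *> *cache;
@end

@implementation HypotheticalImageCache
- (instancetype)init {
    if (self = [super init]) {
        _cache = [NSCache new];
        _cache.countLimit = 100;                       // hypothetical limit on entry count
        _cache.totalCostLimit = 20 * 1024 * 1024;      // ~20 MB, cost supplied by the caller
    }
    return self;
}

- (void)setImage:(UIImage *)image forKey:(NSString *)key cost:(NSUInteger)bytes {
    [self.cache setObject:image forKey:key cost:bytes];
}

- (UIImage *)imageForKey:(NSString *)key {
    return [self.cache objectForKey:key];              // may return nil after eviction
}
@end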

The article is quite long and has been split into several chapters; please see the original post if you want to read the whole thing in one continuous piece.