1. CPU usage

The CPU is arguably the most critical component of a phone: every application depends on it for scheduling and execution, and its resources are limited. When poor design in our APP keeps the CPU running at a high load, the APP stutters, the phone heats up, and the battery drains quickly, all of which seriously hurt the user experience.

Monitoring our application's CPU usage is therefore very important. So how do we capture it?

Our APP runs inside a Mach task, and a task may have multiple threads running at the same time. The thread is the basic unit of CPU utilization, so we can calculate the APP's CPU usage by summing the CPU usage of all threads under the current Mach task.

The Mach Task is described in OS X and iOS Kernel Programming as follows:

A task is a container object through which virtual memory space and other resources, including devices and other handles, are managed. Strictly speaking, a Mach task is not what other operating systems call a process, because Mach, as a microkernel operating system, does not provide "process" logic, only the most basic implementation. In the BSD model, however, there is a simple 1:1 mapping between the two concepts: each BSD process (that is, each OS X process) is associated with an underlying Mach task object.

Conceptual diagram of process subsystem composition in Mac OS X

iOS is based on Apple's Darwin kernel, which is made up of the kernel (XNU), the runtime, and other components. XNU, short for "X is Not Unix", is Darwin's kernel: a hybrid kernel composed of the Mach microkernel and BSD. The Mach kernel is a lightweight platform that performs only the most basic duties of an operating system, such as processes and threads, virtual memory management, task scheduling, inter-process communication, and messaging. The rest of the work, such as file operations and device access, is handled by the BSD layer.

iOS threading is similar to Mac OS X: it is also built on Mach threads, where the thread_basic_info structure encapsulates the basic information of a single thread:

struct thread_basic_info {
    time_value_t  user_time;      /* user run time */
    time_value_t  system_time;    /* system run time */
    integer_t    cpu_usage;       /* scaled cpu usage percentage */
    policy_t      policy;         /* scheduling policy in effect */
    integer_t     run_state;      /* run state (see below) */
    integer_t     flags;          /* various flags (see below) */
    integer_t     suspend_count;  /* suspend count for thread */
    integer_t    sleep_time;      /* number of seconds that thread  has been sleeping */
}

A Mach task keeps a list of its threads. The kernel provides the task_threads API call to get the thread list of a given task, and each thread can then be queried via the thread_info API call; both are declared in thread_act.h.

task_threads stores all threads of target_task in the act_list array, and act_listCnt holds the number of threads:

kern_return_t task_threads
(
    task_t target_task,
    thread_act_array_t *act_list,
    mach_msg_type_number_t *act_listCnt
);

thread_info has the following signature:

kern_return_t thread_info
(
    thread_act_t target_act,
    thread_flavor_t flavor,
    thread_info_t thread_info_out,
    mach_msg_type_number_t *thread_info_outCnt
);

So let’s get CPU usage:

#import "LSLCpuUsage.h"
#import <mach/task.h>
#import <mach/vm_map.h>
#import <mach/mach_init.h>
#import <mach/thread_act.h>
#import <mach/thread_info.h>

@implementation LSLCpuUsage

+ (double)getCpuUsage {
    kern_return_t           kr;
    thread_array_t          threadList;         // list of threads in the current Mach task
    mach_msg_type_number_t  threadCount;        // number of threads in the current Mach task
    thread_info_data_t      threadInfo;         // information buffer for a single thread
    mach_msg_type_number_t  threadInfoCount;    // size of the thread info buffer
    thread_basic_info_t     threadBasicInfo;    // basic information about a thread

    // Get the thread list of the task via the "task_threads" API call;
    // mach_task_self() returns the current Mach task.
    kr = task_threads(mach_task_self(), &threadList, &threadCount);
    if (kr != KERN_SUCCESS) {
        return -1;
    }
    double cpuUsage = 0;
    for (int i = 0; i < threadCount; i++) {
        threadInfoCount = THREAD_INFO_MAX;
        // The flavor parameter is THREAD_BASIC_INFO, which returns the basic thread
        // information defined in the thread_basic_info_t structure.
        kr = thread_info(threadList[i], THREAD_BASIC_INFO, (thread_info_t)threadInfo, &threadInfoCount);
        if (kr != KERN_SUCCESS) {
            return -1;
        }
        threadBasicInfo = (thread_basic_info_t)threadInfo;
        if (!(threadBasicInfo->flags & TH_FLAGS_IDLE)) {
            cpuUsage += threadBasicInfo->cpu_usage;
        }
    }
    vm_deallocate(mach_task_self(), (vm_offset_t)threadList, threadCount * sizeof(thread_t));
    return cpuUsage / (double)TH_USAGE_SCALE * 100.0;
}

@end
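To sample this value over time, here is a minimal Swift sketch that polls the class above once per second (it assumes LSLCpuUsage is exposed to Swift through a bridging header; the 1-second interval and the 80% threshold are illustrative values, not from the original code):

import UIKit

final class CPUMonitor {
    private var timer: Timer?

    func start() {
        // Poll once per second; interval and threshold are illustrative values.
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
            let usage = LSLCpuUsage.getCpuUsage()   // percentage summed over all threads
            if usage > 80.0 {
                print("High CPU usage: \(usage)%")  // report, log, or dump thread stacks here
            }
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }
}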

2. Memory

Although phones ship with more and more memory these days, it is still finite. If our application uses too much memory because of poor design, it risks being killed by the system, which is a devastating experience for users.

The memory usage of a Mach task is stored in the mach_task_basic_info structure, where resident_size is the physical memory used by the application and virtual_size is the virtual memory size. It is defined in task_info.h:

#define MACH_TASK_BASIC_INFO 20 /* always 64-bit basic info */
struct mach_task_basic_info {
        mach_vm_size_t  virtual_size;       /* virtual memory size (bytes) */
        mach_vm_size_t  resident_size;      /* resident memory size (bytes) */
        mach_vm_size_t  resident_size_max;  /* maximum resident memory size (bytes) */
        time_value_t    user_time;          /* total user run time for
                                               terminated threads */
        time_value_t    system_time;        /* total system run time for
                                               terminated threads */
        policy_t        policy;             /* default policy for new threads */
        integer_t       suspend_count;      /* suspend count for task */
};

The task_info API returns information about target_task according to the specified flavor type. It is declared in task.h:

kern_return_t task_info
(
    task_name_t target_task,
    task_flavor_t flavor,
    task_info_t task_info_out,
    mach_msg_type_number_t *task_info_outCnt
);

I first tried the following method to obtain memory information. Its results are roughly in line with Tencent's GT tool, but differ significantly from Xcode and Instruments:

+ (double)getResidentMemory {
    struct mach_task_basic_info info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
    if (task_info(mach_task_self(), MACH_TASK_BASIC_INFO, (task_info_t)&info, &count) == KERN_SUCCESS) {
        return info.resident_size / (1024.0 * 1024.0);
    } else {
        return -1.0;
    }
}

Later I saw a blogger discussing this issue and pointing out that phys_footprint is the correct value to use. I tested it myself, and it is close to what Xcode reports:

+ (double)getMemoryUsage {
    task_vm_info_data_t vmInfo;
    mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
    if (task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)&vmInfo, &count) == KERN_SUCCESS) {
        return (double)vmInfo.phys_footprint / (1024.0 * 1024.0);
    } else {
        return -1.0;
    }
}
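For a Swift-only project, the same phys_footprint read can be sketched roughly as follows (an assumption-laden sketch: the Mach types come in through Foundation/Darwin, and the TASK_VM_INFO_COUNT value is computed by hand because the C macro is not imported into Swift):

import Foundation

// Returns the app's memory footprint in MB, or nil on failure.
func memoryFootprintMB() -> Double? {
    // TASK_VM_INFO_COUNT is a C macro, so compute the equivalent value manually.
    let count = mach_msg_type_number_t(MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size)
    var info = task_vm_info_data_t()
    var size = count
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { intPtr in
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), intPtr, &size)
        }
    }
    guard kr == KERN_SUCCESS else { return nil }
    return Double(info.phys_footprint) / (1024.0 * 1024.0)
}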

The blog mentioned that the definition of phys_footprint can be found in the XNU source, in osfmk/kern/task.c, and argued that the formula in the comment below is what actually accounts for the physical memory used by the application.

/*
 * phys_footprint
 *   Physical footprint: This is the sum of:
 *     + (internal - alternate_accounting)
 *     + (internal_compressed - alternate_accounting_compressed)
 *     + iokit_mapped
 *     + purgeable_nonvolatile
 *     + purgeable_nonvolatile_compressed
 *     + page_table
 *
 * internal
 *   The task's anonymous memory, which on iOS is always resident.
 *
 * internal_compressed
 *   Amount of this task's internal memory which is held by the compressor.
 *   Such memory is no longer actually resident for the task [i.e., resident in its pmap],
 *   and could be either decompressed back into memory, or paged out to storage, depending
 *   on our implementation.
 *
 * iokit_mapped
 *   IOKit mappings: The total size of all IOKit mappings in this task [regardless of
 *   clean/dirty or internal/external state].
 *
 * alternate_accounting
 *   The number of internal dirty pages which are part of IOKit mappings. By definition, these pages
 *   are counted in both internal *and* iokit_mapped, so we must subtract them from the total to avoid
 *   double counting.
 */

Of course, I also agree with this.

3. Startup time

The startup time of your APP directly shapes a user's first impression of it. If startup takes too long, not only does the experience plummet, but Apple's watchdog mechanism may also kill your APP, which would be a tragedy: users see an APP that freezes and crashes as soon as it launches, conclude it is unusable, and long-press it to delete it. (Xcode does not enable the watchdog in debug mode, so we must test our APP on a real device.)

Before measuring the startup time of APP, let’s first understand the startup process of APP:

APP Startup Process

APP startup can be divided into two phases: before main() and after main(), summarized as:

t (total APP startup time) = t1 (time spent before main()) + t2 (time spent after main()).

t1 = the time to load the system dylibs (dynamic libraries) and the App's own executable;

t2 = the time from when main() starts executing until the end of the AppDelegate's application:didFinishLaunchingWithOptions: method.

So we measure and optimize APP startup time for these two stages separately. Let's first look at how to measure the time before main() is executed.

Measure the time before the main() function is executed

One way to measure the time taken before main() is to add the DYLD_PRINT_STATISTICS environment variable to the scheme when debugging on a real device (DYLD_PRINT_STATISTICS_DETAILS gives more detailed output), as shown below:

Before main()

The following output is displayed:

Total pre-main time: 34.22 milliseconds (100.0%)
         dylib loading time:  14.43 milliseconds (42.1%)
        rebase/binding time:   1.82 milliseconds (5.3%)
            ObjC setup time:   3.89 milliseconds (11.3%)
           initializer time:  13.99 milliseconds (40.9%)
        slowest intializers:   2.20 milliseconds (6.4%)
 libBacktraceRecording.dylib:  2.90 milliseconds (8.4%)
  libMainThreadChecker.dylib:  6.55 milliseconds (2.0%)
     libswiftCoreImage.dylib:  0.71 milliseconds (2.0%)

The system dynamic libraries are heavily optimized by Apple, so they do not take much time; most of t1 is spent loading our own App code and linking third-party libraries.

So how can we reduce the time before calling main()? The points we can optimize are:

Reduce unnecessary frameworks, especially third-party ones, because dynamic linking is time-consuming;

Check the Optional and Required settings of linked frameworks: if a framework exists on every iOS version the App supports, set it to Required, because Optional involves additional runtime checks; use Optional only when the framework may be missing on some supported versions.

Merge or delete redundant Objective-C classes; to find classes that are not used in the project, you can use AppCode's code inspection tool;

Remove unused static variables;

Delete methods that are never called or are obsolete;

Defer to +initialize things that do not have to be done in the +load method

Try not to use C++ virtual functions (creating virtual tables is expensive)

Measure the time after the main() function is executed

For the second stage, we measure from the start of main() to the end of application:didFinishLaunchingWithOptions:, which we can do by instrumenting the code with timestamps.

An Objective-C project has a main.m file, so we can measure this directly by adding code:

// 1. Add the following code to main.m:
CFAbsoluteTime AppStartLaunchTime;

int main(int argc, char * argv[]) {
    AppStartLaunchTime = CFAbsoluteTimeGetCurrent();
    ...
}

// 2. Declare extern CFAbsoluteTime AppStartLaunchTime; at the beginning of AppDelegate.m.

// 3. Finally, at the end of didFinishLaunchingWithOptions in AppDelegate.m, add:
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"App startup time --%f", (CFAbsoluteTimeGetCurrent() - AppStartLaunchTime));
});

A Swift project, however, has no main file. Apple's explanation is as follows:

@UIApplicationMain to a regular Swift file. This causes the compiler to synthesize a main entry point for your iOS app, and eliminates the need for a "main.swift" file.

That is, the @UIApplicationMain attribute synthesizes the main entry point for us. So if we need to do something of our own in main, we must create a main.swift file ourselves, which Apple allows:

Remove the @UIApplicationMain flag from the AppDelegate class.

Create your own main.swift file and add the program entry

import UIKit

var appStartLaunchTime: CFAbsoluteTime = CFAbsoluteTimeGetCurrent()

UIApplicationMain(
    CommandLine.argc,
    UnsafeMutableRawPointer(CommandLine.unsafeArgv)
        .bindMemory(
            to: UnsafeMutablePointer<Int8>.self,
            capacity: Int(CommandLine.argc)),
    nil,
    NSStringFromClass(AppDelegate.self)
)

Then add the following at the end of the AppDelegate's didFinishLaunchingWithOptions method:

// APP startup time, measured from the start of main() to the end of didFinishLaunchingWithOptions
DispatchQueue.main.async {
    print("APP startup time, from main() to the end of didFinishLaunchingWithOptions: \(CFAbsoluteTimeGetCurrent() - appStartLaunchTime)")
}

Optimizations after main:

1. Try to use pure code to reduce the use of XIB;

2. Check that all network requests made during the startup phase are sent asynchronously;

3. Check whether time-consuming operations can be deferred or performed asynchronously (a short sketch follows below).
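For point 3, a minimal sketch of what deferring that work can look like (setUpAnalytics() and preloadData() are hypothetical helpers, not part of the article's code):

import Foundation

// Call this from didFinishLaunchingWithOptions instead of doing the work inline.
func deferNonCriticalLaunchWork() {
    // Still on the main queue, but runs only after the launch pass has finished.
    DispatchQueue.main.async {
        setUpAnalytics()
    }
    // Heavy, thread-safe work moves off the main thread entirely.
    DispatchQueue.global(qos: .utility).async {
        preloadData()
    }
}

func setUpAnalytics() { /* hypothetical: configure SDKs not needed for the first screen */ }
func preloadData() { /* hypothetical: warm caches, parse bundled resources, etc. */ }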

4. FPS

According to Wikipedia, FPS is short for Frames Per Second: the number of frames displayed per second.

FPS measures how much information is used to store and display motion video. The more frames per second, the smoother the motion appears; the lower the FPS, the choppier it feels, so FPS is a measure of how well an application performs during rendering. Generally, as long as our APP's FPS stays between 50 and 60, the user experience feels smooth.

The iPhone screen normally refreshes 60 times per second, which corresponds to an FPS of 60. CADisplayLink fires in step with the screen refresh rate, so can we use it to monitor FPS?

First of all, what is CADisplayLink?

CADisplayLink is another NSTimer-like class provided by Core Animation. It fires just before each screen update completes. Its interface is designed to be similar to NSTimer's, so it is essentially a drop-in replacement, but instead of a timeInterval measured in seconds, CADisplayLink has an integer frameInterval property that specifies how many frames to skip between firings. The default is 1, meaning it fires before every screen update; if your animation code takes longer than 1/60 of a second, you can set frameInterval to 2 so the animation runs every other frame (30 frames per second).
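As a small illustration of the frameInterval behavior described above, here is a hedged sketch (HalfRateTicker and step(link:) are illustrative names; on iOS 10+ the preferredFramesPerSecond property plays the same role):

import UIKit

final class HalfRateTicker: NSObject {
    private var link: CADisplayLink?

    func start() {
        // Note: passing self as the target retains it; the retain-cycle issue is discussed later.
        let link = CADisplayLink(target: self, selector: #selector(step(link:)))
        link.frameInterval = 2   // fire every other frame, i.e. roughly 30 times per second
        link.add(to: RunLoop.main, forMode: RunLoopMode.commonModes)
        self.link = link
    }

    @objc private func step(link: CADisplayLink) {
        print("tick at \(link.timestamp)")
    }

    func stop() {
        link?.invalidate()
        link = nil
    }
}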

The following uses CADisplayLink to monitor the interface's FPS value, with reference to YYFPSLabel:

import UIKit

class LSLFPSMonitor: UILabel {
    private var link: CADisplayLink = CADisplayLink.init()
    private var count: NSInteger = 0
    private var lastTime: TimeInterval = 0.0
    private var fpsColor: UIColor = UIColor.green
    public var fps: Double = 0.0

    // MARK: - init
    override init(frame: CGRect) {
        var f = frame
        if f.size == CGSize.zero {
            f.size = CGSize(width: 55.0, height: 22.0)
        }
        super.init(frame: f)
        self.textColor = UIColor.white
        self.textAlignment = .center
        self.font = UIFont.init(name: "Menlo", size: 12.0)
        self.backgroundColor = UIColor.black
        link = CADisplayLink.init(target: LSLWeakProxy(target: self), selector: #selector(tick))
        link.add(to: RunLoop.current, forMode: RunLoopMode.commonModes)
    }

    deinit {
        link.invalidate()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - actions
    @objc func tick(link: CADisplayLink) {
        guard lastTime != 0 else {
            lastTime = link.timestamp
            return
        }

        count += 1
        let delta = link.timestamp - lastTime
        guard delta >= 1.0 else {
            return
        }

        lastTime = link.timestamp
        fps = Double(count) / delta
        let fpsText = "\(String.init(format: "%.3f", fps)) FPS"
        count = 0

        let attrMStr = NSMutableAttributedString(attributedString: NSAttributedString(string: fpsText))
        if fps > 55.0 {
            fpsColor = UIColor.green
        } else if fps >= 50.0 && fps <= 55.0 {
            fpsColor = UIColor.yellow
        } else {
            fpsColor = UIColor.red
        }
        attrMStr.setAttributes([NSAttributedStringKey.foregroundColor:fpsColor], range: NSMakeRange(0, attrMStr.length - 3))
        attrMStr.setAttributes([NSAttributedStringKey.foregroundColor:UIColor.white], range: NSMakeRange(attrMStr.length - 3, 3))
        DispatchQueue.main.async {
            self.attributedText = attrMStr
        }
    }
}

Tested on real devices, this CADisplayLink-based implementation does meet the business need of monitoring FPS and is a useful reference for improving user experience, but its values may differ from those reported by Instruments. Let's look at the possible problems with CADisplayLink.

(1) The value differs from Instruments for the following reason:

CADisplayLink runs in the RunLoop it was added to (usually the main thread's), so it can only detect the frame rate of that RunLoop. The scheduling of tasks managed by the RunLoop is affected by the RunLoop mode and by how busy the CPU is, so to pinpoint an exact performance problem you should confirm it with Instruments.

(2) CADisplayLink can cause retain cycles.

For example:


let link = CADisplayLink.init(target: self, selector: #selector(tick))

let timer = Timer.init(timeInterval: 1.0, target: self, selector: #selector(tick), userInfo: nil, repeats: true)

Reason: both of the above hold a strong reference to self. The timer retains self and self retains the timer, so when the page is dismissed neither side can be released. Assigning self to a weak variable first does not help either:

weak var weakSelf = self
let link = CADisplayLink.init(target: weakSelf, selector: #selector(tick))

So how do we solve this? Some might suggest calling the timer's invalidate method in deinit (or dealloc), but that does not work: because of the retain cycle, deinit is never reached.

The solution provided by the author of YYKit is YYWeakProxy, which inherits from NSProxy rather than NSObject.

NSProxy

An abstract superclass defining an API for objects that act as stand-ins for other objects or for objects that don't exist yet.

That is, NSProxy is an abstract superclass that defines an API for objects acting as stand-ins for other objects, or for objects that do not yet exist.

The modified code is as follows; in my tests the timer is released as expected. The full implementation of LSLWeakProxy is on GitHub.

let link = CADisplayLink.init(target: LSLWeakProxy(target: self), selector: #selector(tick))
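Since the LSLWeakProxy implementation itself lives on GitHub and is not shown here, the following is only a minimal sketch of the idea. NSProxy cannot be conveniently subclassed from Swift, so this Swift sketch forwards through NSObject instead, which is still enough to break the retain cycle for CADisplayLink and Timer targets:

import Foundation

// Holds its target weakly and forwards any message it receives to it.
final class WeakProxySketch: NSObject {
    private(set) weak var target: NSObjectProtocol?

    init(target: NSObjectProtocol) {
        self.target = target
        super.init()
    }

    override func responds(to aSelector: Selector!) -> Bool {
        return target?.responds(to: aSelector) ?? false
    }

    override func forwardingTarget(for aSelector: Selector!) -> Any? {
        return target
    }
}

Because the display link retains the proxy rather than self, self can deallocate normally and invalidate the link in deinit.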

5. Lag

Before we look at the causes of lag, let's first look at how the screen displays images.

How the screen displays an image:

Diagram of the screen rendering principle

Today's mobile devices basically use double buffering plus vertical sync (VSync) for screen display.

As shown in the figure above, the CPU, the GPU, and the display cooperate to put content on screen. The CPU computes what to display (view creation, layout calculation, image decoding, text drawing, and so on) and submits the result to the GPU, which transforms, composites, and renders it. The GPU renders a frame into a buffer for the video controller to read; once the next frame is rendered, the GPU points the video controller at the second buffer (double buffering). The GPU waits for the display's VSync signal before rendering a new frame and swapping buffers, which prevents tearing and keeps the picture smooth, at the cost of more computing resources and some latency.

Causes of lag:

Diagram of a dropped frame

On a device that uses the VSync mechanism described above, if the CPU or GPU fails to submit its content within one VSync interval, that frame is dropped and shown at the next opportunity, while the display keeps showing the previous content. Code on the main thread that blocks it from responding to taps and swipes, or that delays UI drawing, is the most common cause of lag.

Lag monitoring:

There are generally two implementation schemes for lag monitoring:

(1) Main-thread stall monitoring: a child thread observes the main thread's RunLoop and checks whether the time spent between two RunLoop states exceeds a threshold.

(2) FPS monitoring: to keep UI interaction smooth, the App's refresh rate should stay close to 60 fps. The implementation of FPS monitoring was covered above and is skipped here.

In practice, FPS values jitter a lot, which makes it hard to detect lag reliably. To solve this, we instead monitor the main thread by measuring how long each pass of its message loop takes; when that time exceeds a specified threshold, a lag is recorded.

This is also the approach of Hertz, the performance monitoring scheme used by Meituan's mobile team. The WeChat team proposed a similar scheme in practice; see "WeChat Reading iOS Performance Optimization Summary".

Flow chart of Meituan Hertz scheme

The scheme observes that the Sources events (and other interaction events) triggered by scrolling are normally executed quickly, after which the RunLoop enters the kCFRunLoopBeforeWaiting state. If a lag occurs during scrolling, the RunLoop must be stuck in either the kCFRunLoopAfterWaiting or the kCFRunLoopBeforeSources state.

Therefore, the first scheme to monitor the main thread is:

Start a child thread that continuously checks whether the time spent between kCFRunLoopBeforeSources and kCFRunLoopAfterWaiting exceeds a threshold, and use that to decide whether the main thread is lagging.

However, since the main thread's RunLoop sits in the BeforeWaiting state when idle, this detection method can report the main thread as stalled even when nothing is wrong.

To solve this problem, Hanshen (Nanzhi Qinhan) offers a solution in the third-party lag-detection library ANREye. The general idea is: create a child thread that loops, setting a flag to YES on each cycle, and then dispatch a task to the main thread that sets the flag to NO. The child thread sleeps for the timeout threshold and then checks whether the flag was set to NO; if it was not, the main thread is considered stalled.

Combining this with the scheme above, dispatching flag-setting tasks to the main thread handles lag detection correctly even when the main thread is idle in the BeforeWaiting state:

#define lsl_SEMAPHORE_SUCCESS 0
static BOOL lsl_is_monitoring = NO;
static dispatch_semaphore_t lsl_semaphore;
static NSTimeInterval lsl_time_out_interval = 0.05;


@implementation LSLAppFluencyMonitor

static inline dispatch_queue_t __lsl_fluecy_monitor_queue() {
    static dispatch_queue_t lsl_fluecy_monitor_queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        lsl_fluecy_monitor_queue = dispatch_queue_create("com.dream.lsl_monitor_queue", NULL);
    });
    return lsl_fluecy_monitor_queue;
}

static inline void __lsl_monitor_init() {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        lsl_semaphore = dispatch_semaphore_create(0);
    });
}

#pragma mark - Public
+ (instancetype)monitor {
    return [LSLAppFluencyMonitor new];
}

- (void)startMonitoring {
    if (lsl_is_monitoring) { return; }
    lsl_is_monitoring = YES;
    __lsl_monitor_init();
    dispatch_async(__lsl_fluecy_monitor_queue(), ^{
        while (lsl_is_monitoring) {
            __block BOOL timeOut = YES;
            dispatch_async(dispatch_get_main_queue(), ^{
                timeOut = NO;
                dispatch_semaphore_signal(lsl_semaphore);
            });
            [NSThread sleepForTimeInterval: lsl_time_out_interval];
            if (timeOut) {
                [LSLBacktraceLogger lsl_logMain];        // log the main thread's stack
//                [LSLBacktraceLogger lsl_logCurrent];   // log the current thread's stack
//                [LSLBacktraceLogger lsl_logAllThread]; // log the stacks of all threads
            }
            dispatch_wait(lsl_semaphore, DISPATCH_TIME_FOREVER);
        }
    });
}

- (void)stopMonitoring {
    if (!lsl_is_monitoring) { return; }
    lsl_is_monitoring = NO;
}

@end

Here LSLBacktraceLogger is the class that captures stack traces; see the code on GitHub for details. A log entry looks like this:

2018-08-16 12:36:33.910491+0800 AppPerformance[4802:171145] Backtrace of Thread 771:
======================================================================================
libsystem_kernel.dylib         0x10d089bce __semwait_signal + 10
libsystem_c.dylib              0x10ce55d10 usleep + 53
AppPerformance                 0x108b8b478 $S14AppPerformance25LSLFPSTableViewControllerC05tableD0_12cellForRowAtSo07UITableD4CellCSo0kD0C_10Foundation9IndexPathVtF + 1144
AppPerformance                 0x108b8b60b $S14AppPerformance25LSLFPSTableViewControllerC05tableD0_12cellForRowAtSo07UITableD4CellCSo0kD0C_10Foundation9IndexPathVtFTo + 155
UIKitCore                      0x1135b104f -[_UIFilteredDataSource tableView:cellForRowAtIndexPath:] + 95
UIKitCore                      0x1131ed34d -[UITableView _createPreparedCellForGlobalRow:withIndexPath:willDisplay:] + 765
UIKitCore                      0x1131ed8da -[UITableView _createPreparedCellForGlobalRow:willDisplay:] + 73
UIKitCore                      0x1131b4b1e -[UITableView _updateVisibleCellsNow:isRecursive:] + 2863
UIKitCore                      0x1131d57eb -[UITableView layoutSubviews] + 165
UIKitCore                      0x1133921ee -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1501
QuartzCore                     0x10ab72eb1 -[CALayer layoutSublayers] + 175
QuartzCore                     0x10ab77d8b _ZN2CA5Layer16layout_if_neededEPNS_11TransactionE + 395
QuartzCore                     0x10aaf3b45 _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 349
QuartzCore                     0x10ab285b0 _ZN2CA11Transaction6commitEv + 576
QuartzCore                     0x10ab29374 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 76
CoreFoundation                 0x109dc3757 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
CoreFoundation                 0x109dbdbde __CFRunLoopDoObservers + 430
CoreFoundation                 0x109dbe271 __CFRunLoopRun + 1537
CoreFoundation                 0x109dbd931 CFRunLoopRunSpecific + 625
GraphicsServices               0x10f5981b5 GSEventRunModal + 62
UIKitCore                      0x112c812ce UIApplicationMain + 140
AppPerformance                 0x108b8c1f0 main + 224
libdyld.dylib                  0x10cd4dc9d start + 1

Scheme two: combine with CADisplayLink

CADisplayLink can also be used to flag a lag when the FPS value stays below a certain threshold, as in the sketch below.
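A rough sketch of what that could look like, reusing the counting idea from LSLFPSMonitor above (the 45 fps threshold and the class name are illustrative choices, not from the article):

import UIKit

final class FPSLagDetector: NSObject {
    private var link: CADisplayLink?
    private var count = 0
    private var lastTime: CFTimeInterval = 0.0
    private let threshold: Double = 45.0   // illustrative value; tune per app

    func start() {
        let link = CADisplayLink(target: LSLWeakProxy(target: self), selector: #selector(tick(link:)))
        link.add(to: RunLoop.main, forMode: RunLoopMode.commonModes)
        self.link = link
    }

    @objc private func tick(link: CADisplayLink) {
        if lastTime == 0 {
            lastTime = link.timestamp
            return
        }
        count += 1
        let delta = link.timestamp - lastTime
        guard delta >= 1.0 else { return }
        let fps = Double(count) / delta
        count = 0
        lastTime = link.timestamp
        if fps < threshold {
            print("Possible lag: FPS dropped to \(fps)")   // report or dump thread stacks here
        }
    }

    func stop() {
        link?.invalidate()
        link = nil
    }
}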