Interface optimization is essentially about solving lag and making the UI smoother. The sections below first analyze the causes of lag and then introduce concrete optimization schemes, to show how interface optimization is done.
-
Interface rendering process
For the detailed process, refer to the earlier article "Preliminary study on picture rendering". In short, the rendering pipeline can be divided into three stages: the CPU processing stage, the GPU processing stage, and the video-controller display stage. The CPU mainly computes the model data that needs to be rendered; the GPU renders the image from the model data provided by the CPU and writes the result into a frame buffer; the video controller then reads the data in the frame buffer for the final on-screen display.
The general flow chart is as follows:
To solve the problem of image tearing, Apple uses VSync plus double buffering: every time the display finishes showing a frame, a vertical sync signal (VSync) is sent; after receiving this signal, the display begins to read data from the other frame buffer, and the App starts rendering a new frame.
-
Lag principle
According to the rendering flow shown above, after one frame has finished displaying, a vertical sync signal is sent and the video controller switches to reading the data in the other frame buffer. If at that moment the CPU and GPU have not finished their work, the other frame buffer is still locked and contains no new data, so the display keeps showing the previous frame's image. It then has to wait until the next vertical sync signal before the video controller can read the newly finished frame buffer and display it. The frame missed during this waiting period is a dropped frame, which is what we perceive as lag.
The lag process is illustrated below:
This waiting is what causes the lag.
-
Lag detection
-
FPS monitoring
The iPhone screen refreshes at 60Hz, i.e. 60 screen refreshes per second, which means roughly 60 rendered frames per second and a budget of about 1000/60 ≈ 16.67 ms per frame. As long as each frame stays within that budget, the interface feels relatively smooth; once the frame rate drops to around 45 FPS or lower, lag becomes clearly noticeable. You can implement FPS monitoring with YYFPSLabel. The monitoring principle relies mainly on CADisplayLink: CADisplayLink fires on every screen refresh and provides the refresh timestamp, so dividing the number of refreshes counted by the elapsed time gives the FPS. The source code is as follows:

```objc
#import "YYFPSLabel.h"
#import "YYKit.h"

#define kSize CGSizeMake(55, 20)

@implementation YYFPSLabel {
    CADisplayLink *_link;
    NSUInteger _count;
    NSTimeInterval _lastTime;
    UIFont *_font;
    UIFont *_subFont;
}

- (instancetype)initWithFrame:(CGRect)frame {
    if (frame.size.width == 0 && frame.size.height == 0) {
        frame.size = kSize;
    }
    self = [super initWithFrame:frame];

    self.layer.cornerRadius = 5;
    self.clipsToBounds = YES;
    self.textAlignment = NSTextAlignmentCenter;
    self.userInteractionEnabled = NO;
    self.backgroundColor = [UIColor colorWithWhite:0.000 alpha:0.700];

    _font = [UIFont fontWithName:@"Menlo" size:14];
    if (_font) {
        _subFont = [UIFont fontWithName:@"Menlo" size:4];
    } else {
        _font = [UIFont fontWithName:@"Courier" size:14];
        _subFont = [UIFont fontWithName:@"Courier" size:4];
    }

    // Use a weak proxy so the display link does not retain self and create a retain cycle.
    _link = [CADisplayLink displayLinkWithTarget:[YYWeakProxy proxyWithTarget:self]
                                        selector:@selector(tick:)];
    [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    return self;
}

- (void)dealloc {
    [_link invalidate];
}

- (CGSize)sizeThatFits:(CGSize)size {
    return kSize;
}

- (void)tick:(CADisplayLink *)link {
    if (_lastTime == 0) {
        _lastTime = link.timestamp;
        return;
    }

    // Count the refreshes and recompute the FPS roughly once per second.
    _count++;
    NSTimeInterval delta = link.timestamp - _lastTime;
    if (delta < 1) return;
    _lastTime = link.timestamp;
    float fps = _count / delta;
    _count = 0;

    CGFloat progress = fps / 60.0;
    UIColor *color = [UIColor colorWithHue:0.27 * (progress - 0.2) saturation:1 brightness:0.9 alpha:1];

    NSMutableAttributedString *text = [[NSMutableAttributedString alloc]
        initWithString:[NSString stringWithFormat:@"%d FPS", (int)round(fps)]];
    [text setColor:color range:NSMakeRange(0, text.length - 3)];
    [text setColor:[UIColor whiteColor] range:NSMakeRange(text.length - 3, 3)];
    text.font = _font;
    [text setFont:_subFont range:NSMakeRange(text.length - 4, 1)];

    self.attributedText = text;
}

@end
```
FPS is only an auxiliary number, best used during the development phase, because the monitoring itself wakes the RunLoop frequently: if the RunLoop is otherwise idle, being woken up by CADisplayLink is itself a performance cost.
-
Use RunLoop to detect lag
The idea is to judge whether there is a lag by measuring how long one cycle of the main thread's RunLoop takes, using GCD semaphores. Initialize a semaphore to 0, then start a child thread that waits on the semaphore by calling dispatch_semaphore_wait with a timeout of 1 second. In the callback of an Observer registered on the main thread's RunLoop, send the signal by calling dispatch_semaphore_signal; as long as the main RunLoop keeps changing state, the wait returns 0 (no timeout). If the wait times out, check the RunLoop's current state: if it is still kCFRunLoopBeforeSources or kCFRunLoopAfterWaiting after two consecutive timeouts (about two seconds), a lag has occurred. The detailed code is as follows (related comments are included in the code):

```objc
#import "LGBlockMonitor.h"

@interface LGBlockMonitor (){
    CFRunLoopActivity activity;
}

@property (nonatomic, strong) dispatch_semaphore_t semaphore;
@property (nonatomic, assign) NSUInteger timeoutCount;

@end

@implementation LGBlockMonitor

+ (instancetype)sharedInstance {
    static id instance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        instance = [[self alloc] init];
    });
    return instance;
}

- (void)start {
    [self registerObserver];
    [self startMonitor];
}

static void CallBack(CFRunLoopObserverRef observer, CFRunLoopActivity activity, void *info)
{
    LGBlockMonitor *monitor = (__bridge LGBlockMonitor *)info;
    monitor->activity = activity;
    // Signal the waiting child thread every time the main RunLoop changes state.
    dispatch_semaphore_t semaphore = monitor->_semaphore;
    dispatch_semaphore_signal(semaphore);
}

- (void)registerObserver {
    CFRunLoopObserverContext context = {0, (__bridge void *)self, NULL, NULL};
    // NSIntegerMax: lowest priority, so the observer interferes as little as possible.
    CFRunLoopObserverRef observer = CFRunLoopObserverCreate(kCFAllocatorDefault,
                                                            kCFRunLoopAllActivities,
                                                            YES,
                                                            NSIntegerMax,
                                                            &CallBack,
                                                            &context);
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
}

- (void)startMonitor {
    // Create the semaphore with an initial value of 0.
    _semaphore = dispatch_semaphore_create(0);
    // Monitor the main RunLoop from a child thread.
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        while (YES) {
            // Wait for at most 1 second. If no signal arrives in time, the wait times out and
            // dispatch_semaphore_wait returns a non-zero value; if the signal arrives, it
            // returns 0 and the loop simply waits for the next RunLoop state change.
            long st = dispatch_semaphore_wait(self->_semaphore,
                                              dispatch_time(DISPATCH_TIME_NOW, 1 * NSEC_PER_SEC));
            if (st != 0) {
                if (self->activity == kCFRunLoopBeforeSources ||
                    self->activity == kCFRunLoopAfterWaiting) {
                    // The main RunLoop has stayed in a busy state for a full second.
                    // Require two consecutive timeouts before reporting, so short hiccups
                    // of about a second do not cause large-scale printing.
                    if (++self->_timeoutCount < 2) {
                        NSLog(@"timeoutCount==%lu", (unsigned long)self->_timeoutCount);
                        continue;
                    }
                    NSLog(@"Detected more than two consecutive lags");
                }
            }
            self->_timeoutCount = 0;
        }
    });
}

@end
```
-
WeChat matrix
This scheme is also based on the RunLoop; its general flow is the same as the RunLoop-based scheme above, but WeChat's matrix adds stack analysis, which can locate the call stack of the time-consuming methods. So when the cause of a lag needs to be analyzed precisely, matrix can do it. The RunLoop-based scheme can also be combined with PLCrashReporter, an open-source third-party library for obtaining stack information.
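For reference, here is a minimal sketch of capturing a live stack report with PLCrashReporter when a lag is detected. The configuration values and the helper name are illustrative, not matrix's actual code:

```objc
#import <CrashReporter/CrashReporter.h>

// Called from the lag monitor (e.g. inside the timeout branch of the RunLoop scheme)
// to grab the stacks of all threads at that moment.
static NSString *LGCaptureLiveStackReport(void) {
    PLCrashReporterConfig *config =
        [[PLCrashReporterConfig alloc] initWithSignalHandlerType:PLCrashReporterSignalHandlerTypeBSD
                                           symbolicationStrategy:PLCrashReporterSymbolicationStrategyAll];
    PLCrashReporter *reporter = [[PLCrashReporter alloc] initWithConfiguration:config];

    // Generate a report for the live (non-crashed) process.
    NSData *data = [reporter generateLiveReport];
    PLCrashReport *report = [[PLCrashReport alloc] initWithData:data error:NULL];

    // Format it like an Apple crash log so the hot call stack can be read directly.
    return [PLCrashReportTextFormatter stringValueForCrashReport:report
                                                  withTextFormat:PLCrashReportTextFormatiOS];
}
```
-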
Didi DoraemonKit
Its approach is basically to keep "pinging" the main thread from a child thread: when the main thread is stuck, the ping gets no response within the expected time, and the lag is detected that way.
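The following sketch only illustrates the ping idea; the threshold and logging are illustrative, not DoraemonKit's actual implementation:

```objc
#import <Foundation/Foundation.h>

static void StartMainThreadPing(void) {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        while (YES) {
            __block BOOL responded = NO;
            dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

            // "Ping": ask the main queue to run a tiny task.
            dispatch_async(dispatch_get_main_queue(), ^{
                responded = YES;
                dispatch_semaphore_signal(semaphore);
            });

            // Wait up to ~33 ms (about two 60 FPS frames) for the "pong".
            dispatch_semaphore_wait(semaphore,
                                    dispatch_time(DISPATCH_TIME_NOW, 33 * NSEC_PER_MSEC));
            if (!responded) {
                NSLog(@"Main thread appears to be stuck; dump its call stack here");
            }

            // Ping at a fixed interval.
            [NSThread sleepForTimeInterval:0.1];
        }
    });
}
```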
-
Optimization scheme
From the analysis of the causes of lag above, we know that lag mainly comes from the CPU or GPU stage taking too long and causing frames to be dropped, so the main work of interface optimization is to reduce the load on the CPU and the GPU.
-
Pre-layout
Pre-layout is mainly about reducing the load on the CPU.
Say we have a TableView whose cell height has to be determined from the content of each cell. We know that TableView has a reuse mechanism: when a cell scrolls onto the screen and the reuse pool has a cell available, that cell is reused to save resources, but its height still has to be calculated from the new data and its content laid out again. So as you scroll the TableView back and forth, the frame of the same cell is calculated over and over again, which burdens the CPU. If instead the cell's frame is worked out when the data model is created, and the TableView simply returns the cached frame, then even if you scroll the TableView back and forth, the frame calculation for the same cell is performed only once, which reduces the load. As shown in the figure below, building a cell needs a model to provide the data and a layout to tell the cell how to lay itself out. A minimal sketch of this approach follows.
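A minimal sketch of pre-layout, assuming a hypothetical FeedModel/FeedCellLayout pair (the names and layout rules are illustrative): the frame is computed once when the data arrives, and the data source only reads the cached values.

```objc
#import <UIKit/UIKit.h>

@interface FeedCellLayout : NSObject
@property (nonatomic, assign) CGRect titleFrame;
@property (nonatomic, assign) CGFloat cellHeight;
@end

@implementation FeedCellLayout
@end

@interface FeedModel : NSObject
@property (nonatomic, copy)   NSString *title;
@property (nonatomic, strong) FeedCellLayout *layout;   // computed once, then reused
@end

@implementation FeedModel

- (void)setTitle:(NSString *)title {
    _title = [title copy];
    // Compute the layout as soon as the data arrives (ideally on a background thread),
    // not in cellForRowAtIndexPath: / heightForRowAtIndexPath:.
    FeedCellLayout *layout = [FeedCellLayout new];
    CGFloat width = [UIScreen mainScreen].bounds.size.width - 30;
    CGRect bounds = [title boundingRectWithSize:CGSizeMake(width, CGFLOAT_MAX)
                                        options:NSStringDrawingUsesLineFragmentOrigin
                                     attributes:@{NSFontAttributeName : [UIFont systemFontOfSize:15]}
                                        context:nil];
    layout.titleFrame = CGRectMake(15, 10, width, ceil(bounds.size.height));
    layout.cellHeight = CGRectGetMaxY(layout.titleFrame) + 10;
    _layout = layout;
}

@end

// In the data source, no measurement happens any more:
// - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
//     return self.models[indexPath.row].layout.cellHeight;
// }
```
-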
Pre-decode & pre-render
In the image rendering flow, after the CPU stage obtains the image's vertex data and texture, the image is decoded into a bitmap, which is then handed to the GPU for rendering. The main flow chart is as follows.
If the image is very large, the decoding work occupies the main thread's RunLoop, which then cannot do anything else (such as respond to scrolling), and lag appears. So the decoding should be moved to an asynchronous thread so that it does not occupy the main thread. You might think it is enough to simply load the image on an asynchronous thread, create a UIImage or CGImage there, and then set it on the UIImageView on the main thread. But if you write such code and look at the stack with Instruments' Time Profiler, you will find that the image decoding still happens on the main thread. The common approach to this problem is to draw the image into a CGBitmapContext on the child thread and then create the image directly from that bitmap; this is how image decoding is handled in third-party frameworks such as SDWebImage. The decode code is as follows (here `self` is the image URL string, `queue` is a background queue, and `completion` is the caller-supplied callback):

```objc
dispatch_async(queue, ^{
    // Load the image data and get its CGImage on the background queue.
    CGImageRef cgImage = [UIImage imageWithData:
                          [NSData dataWithContentsOfURL:[NSURL URLWithString:self]]].CGImage;

    // Check whether the image has an alpha channel.
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cgImage) & kCGBitmapAlphaInfoMask;
    BOOL hasAlpha = NO;
    if (alphaInfo == kCGImageAlphaPremultipliedLast ||
        alphaInfo == kCGImageAlphaPremultipliedFirst ||
        alphaInfo == kCGImageAlphaLast ||
        alphaInfo == kCGImageAlphaFirst) {
        hasAlpha = YES;
    }

    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
    bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;

    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the image into a bitmap context: this is where the actual decoding happens,
    // and it now happens on the background queue instead of the main thread.
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 CGColorSpaceCreateDeviceRGB(), bitmapInfo);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    cgImage = CGBitmapContextCreateImage(context);

    // cornerRadius: is a category method that rounds the corners while still off the main thread.
    UIImage *image = [[UIImage imageWithCGImage:cgImage] cornerRadius:width * 0.5];

    CGContextRelease(context);
    CGImageRelease(cgImage);

    // Hand the fully decoded image back through the completion block (set it on the main thread).
    completion(image);
});
```
-
Load on demand
As the name implies, load only what needs to be displayed and skip what does not. For example, do not load images while a TableView is scrolling; load them only after scrolling stops (using the RunLoop: schedule the image drawing in the default mode, which does not run during scrolling). A minimal sketch is shown below.
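A minimal sketch of the idea (the image and view names are illustrative): scheduling the work only for NSDefaultRunLoopMode means it will not run while the main RunLoop is in UITrackingRunLoopMode, i.e. while the user is scrolling, so the image is only set after scrolling stops.

```objc
UIImage *image = [UIImage imageNamed:@"placeholder"]; // assume the decoded image is ready
[self.imageView performSelector:@selector(setImage:)
                     withObject:image
                     afterDelay:0
                        inModes:@[NSDefaultRunLoopMode]];
```
-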
Asynchronous rendering
Before talking about asynchronous rendering, let's look at the relationship between UIView and CALayer:

- UIView is based on the UIKit framework; it can receive click events, handle user touch events, and manage subviews.
- CALayer is based on Core Animation, and Core Animation is based on QuartzCore, so CALayer is only responsible for display and cannot handle user touch events.
- UIView inherits directly from UIResponder, while CALayer inherits from NSObject.
- UIView is responsible for receiving and responding to events, CALayer is responsible for displaying the UI, and UIView relies on its CALayer for display.

Conclusion: UIView is mainly responsible for event handling, CALayer is mainly responsible for display. The principle of asynchronous rendering is to draw all of the views into a bitmap on a child thread, then return to the main thread and assign the bitmap to the layer's contents. For example, the asynchronous rendering flow of the Graver framework is as follows:
The core source code is as follows:

```objc
if (drawingFinished && targetDrawingCount == layer.drawingCount) {
    CGImageRef CGImage = context ? CGBitmapContextCreateImage(context) : NULL;
    UIImage *image = CGImage ? [UIImage imageWithCGImage:CGImage] : nil;

    void (^finishBlock)(void) = ^{
        // If another draw was scheduled while we were drawing, discard this result.
        if (targetDrawingCount != layer.drawingCount) {
            failedBlock();
            return;
        }
        // Back on the main thread: hand the bitmap to the layer.
        layer.contents = (id)image.CGImage;
        // ...
    };

    if (drawInBackground) dispatch_async(dispatch_get_main_queue(), finishBlock);
    else finishBlock();

    // release the CGImageRef and end the image context here
}
```
The final renderings are as follows:
Frameworks such as YYAsyncLayer can also be used to achieve asynchronous rendering.
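For reference, here is a minimal, self-contained sketch of the same draw-in-background-then-commit-on-main-thread pattern. This is not Graver's or YYAsyncLayer's actual code; the method name is illustrative and the method is assumed to live in a view or cell class.

```objc
- (void)asyncRenderText:(NSAttributedString *)text intoLayer:(CALayer *)layer size:(CGSize)size {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        // All drawing happens off the main thread.
        UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:size];
        UIImage *image = [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
            [[UIColor whiteColor] setFill];
            [ctx fillRect:CGRectMake(0, 0, size.width, size.height)];
            [text drawInRect:CGRectMake(0, 0, size.width, size.height)];
        }];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Only the cheap assignment runs on the main thread.
            layer.contents = (__bridge id)image.CGImage;
        });
    });
}
```
-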
Other
- Reduce the number of levels in the layer (view) hierarchy
- Reduce off-screen rendering
- Use images whose size matches the display size (do not use images that are too large)
- Avoid using addSubview to dynamically add views to a cell
- Try to avoid transparent views, because a transparent view forces the GPU to blend its pixels with the pixels of the layers below it when computing the final color (when two layers overlap and the upper one is semi-transparent, color blending is triggered: the lower layer is not simply covered; instead a new color value is calculated from both layers and that blended color is shown on screen). A minimal sketch follows this list.
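A minimal sketch of avoiding blending, assuming a plain label on a white background (the values are illustrative):

```objc
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 40)];
label.opaque = YES;                              // promise the system there is no transparency
label.backgroundColor = [UIColor whiteColor];    // avoid a clear background, which forces blending
label.alpha = 1.0;                               // alpha < 1 would also trigger blending
```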
-