Interface optimization is essentially about solving lag and keeping the interface smooth. Below we first analyze the causes of lag, then introduce concrete optimization schemes, to see how interface optimization is actually done.

  • Interface rendering process

    The detailed process can be found in "Preliminary Study on Picture Rendering"; here is a brief overview. Rendering can generally be divided into three stages: the CPU processing stage, the GPU processing stage, and the video controller display stage.

    1. The CPU mainly computes the model data needed for rendering
    2. The GPU renders the image based on the model data provided by the CPU, then stores the result in a frame buffer
    3. The video controller reads the data from the frame buffer for final display

    The general flow chart is as follows:



    To solve the problem of screen tearing, Apple uses VSync plus double buffering: when the display finishes showing a frame, it sends a vertical sync (VSync) signal; on receiving this signal, the video controller switches to reading data from the other frame buffer, and the App starts rendering a new frame.

  • Cause of lag

    According to the rendering flow chart above, after a frame finishes displaying, a vertical sync signal is sent and the video controller tries to read the next frame's data from the other frame buffer. If at that moment the CPU and GPU have not finished their work, that frame buffer is still locked and holds no new data, so the display keeps showing the previous frame and has to wait for the next sync signal before the video controller can read the other buffer. The frame missed during this wait is a dropped frame, which the user perceives as lag.

    A diagram of the lag is as follows:

    This waiting is what causes the lag.

  • Lag detection

    1. FPS monitoring

      Apple's recommended refresh rate for the iPhone is 60Hz, i.e. 60 screen refreshes per second, which means 60 frames rendered per second. Each frame therefore has about 1000/60 ≈ 16.67ms to render for the interface to feel smooth; when the frame rate falls below roughly 45 FPS the stuttering becomes obvious. You can use YYFPSLabel to implement FPS monitoring. The principle relies on CADisplayLink: CADisplayLink fires on every screen refresh, so you can record the refresh timestamps and divide the number of refreshes by the elapsed time to get the FPS. The source code is as follows:

      #import "YYFPSLabel.h" #import "YYKit.h" #define kSize CGSizeMake(55, 20) @implementation YYFPSLabel { CADisplayLink *_link; NSUInteger _count; NSTimeInterval _lastTime; UIFont *_font; UIFont *_subFont; NSTimeInterval _llll; } - (instancetype)initWithFrame:(CGRect)frame { if (frame.size.width == 0 && frame.size.height == 0) { frame.size = kSize; } self = [super initWithFrame:frame]; self.layer.cornerRadius = 5; self.clipsToBounds = YES; self.textAlignment = NSTextAlignmentCenter; self.userInteractionEnabled = NO; Self. BackgroundColor = [UIColor colorWithWhite:0.000 alpha:0.700]; _font = [UIFont fontWithName:@"Menlo" size:14]; if (_font) { _subFont = [UIFont fontWithName:@"Menlo" size:4]; } else { _font = [UIFont fontWithName:@"Courier" size:14]; _subFont = [UIFont fontWithName:@"Courier" size:4]; _link = [CADisplayLink displayLinkWithTarget:[YYWeakProxy proxyWithTarget:self] selector:@selector(tick:)]; [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes]; return self; } - (void)dealloc { [_link invalidate]; } - (CGSize)sizeThatFits:(CGSize)size { return kSize; } - (void)tick:(CADisplayLink *)link { if (_lastTime == 0) { _lastTime = link.timestamp; NSLog(@"sdf"); return; } // count _count++; NSTimeInterval delta = link.timestamp -_lasttime; if (delta < 1) return; _lastTime = link.timestamp; float fps = _count / delta; _count = 0; CGFloat progress = FPS / 60.0; Color = [UIColor colorWithHue:0.27 * (Progress - 0.2) saturation:1 brightness:0.9 alpha:1]; NSMutableAttributedString *text = [[NSMutableAttributedString alloc] initWithString:[NSString stringWithFormat:@"%d FPS",(int)round(fps)]]; [text setColor:color range:NSMakeRange(0, text.length - 3)]; [text setColor:[UIColor whiteColor] range:NSMakeRange(text.length - 3, 3)]; text.font = _font; [text setFont:_subFont range:NSMakeRange(text.length - 4, 1)]; self.attributedText = text; } @endCopy the code

      The FPS number is only an auxiliary tool during development, because this approach frequently wakes the runloop: if the runloop is idle, being woken by CADisplayLink is itself a performance cost.

    2. Use RunLoop to detect lag

      The idea is to monitor how long one cycle of the main thread's Runloop takes in order to decide whether there is a lag. This is done together with GCD: create a semaphore initialized to 0, then start a child thread that waits on it by calling dispatch_semaphore_wait with a timeout of 1 second. In the callback of an Observer registered on the main thread's Runloop, call dispatch_semaphore_signal to send a signal. If the wait times out, check the Runloop's current state: if it stays in kCFRunLoopBeforeSources or kCFRunLoopAfterWaiting for about two consecutive seconds, the main thread is considered stuck. The detailed code is as follows (related comments are included in the code):

      #import "LGBlockMonitor.h" @interface LGBlockMonitor (){ CFRunLoopActivity activity; } @property (nonatomic, strong) dispatch_semaphore_t semaphore; @property (nonatomic, assign) NSUInteger timeoutCount; @end @implementation LGBlockMonitor + (instancetype)sharedInstance { static id instance = nil; static dispatch_once_t onceToken; dispatch_once(&onceToken, ^{ instance = [[self alloc] init]; }); return instance; } - (void)start{ [self registerObserver]; [self startMonitor]; } static void CallBack(CFRunLoopObserverRef observer, CFRunLoopActivity activity, void *info) { LGBlockMonitor *monitor = (__bridge LGBlockMonitor *)info; monitor->activity = activity; // Dispatch_semaphore_t semaphore = monitor->_semaphore; dispatch_semaphore_signal(semaphore); } - (void)registerObserver{ CFRunLoopObserverContext context = {0,(__bridge void*)self,NULL,NULL}; //NSIntegerMax : CFRunLoopObserverRef observer = CFRunLoopObserverCreate(kCFAllocatorDefault, kCFRunLoopAllActivities, YES, NSIntegerMax, &CallBack, &context); CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes); } - (void)startMonitor{// create signal c _semaphore = dispatch_semaphore_create(0); * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * All tasks in RunLoop // do not receive a signal, the bottom layer will first subtract the semaphore, and then the semaphore will become negative // so start to enter and wait, When the wait time has reached and no signal has been received, then we will add and restore the semaphore. // We will execute the method of entering the wait. Dispatch_semaphore_wait will return a non-zero number. // When the signal is received, the semaphore is 1 and the bottom layer is subtracted. Long st = dispatch_semaphore_wait(self->_semaphore, dispatch_time(DISPATCH_TIME_NOW, 1 * NSEC_PER_SEC)); if (st ! = 0) { if (self->activity == kCFRunLoopBeforeSources || self->activity == kCFRunLoopAfterWaiting) { If (++self->_timeoutCount < 2){if (++self->_timeoutCount < 2){ NSLog(@"timeoutCount==%lu",(unsigned long)self->_timeoutCount); continue; } // If it is more than two seconds, it is possible to keep the scale of a second or so to avoid large-scale printing! NSLog(@" detected more than two consecutive pauses "); } } self->_timeoutCount = 0; }}); } @endCopy the code
    3. WeChat matrix

      This scheme also relies on the runloop; the overall approach is the same as scheme 2, but WeChat adds stack analysis, which locates the call stack of the time-consuming method and therefore pinpoints the cause of the lag more precisely. Lag can thus be analyzed with WeChat's matrix. Scheme 2 can also be combined with PLCrashReporter, an open-source third-party library, to obtain stack information.
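
      For example, a minimal sketch of dumping the thread stacks with PLCrashReporter when a timeout is detected might look like the following. This assumes PLCrashReporter has been integrated; the function name and the place where it would be called are illustrative, not part of the schemes above:

      #import <CrashReporter/CrashReporter.h>

      // Assumption: called from the monitoring child thread when a lag is detected.
      static void LGDumpThreadStacks(void) {
          PLCrashReporterConfig *config = [[PLCrashReporterConfig alloc]
              initWithSignalHandlerType:PLCrashReporterSignalHandlerTypeBSD
                  symbolicationStrategy:PLCrashReporterSymbolicationStrategyAll];
          PLCrashReporter *reporter = [[PLCrashReporter alloc] initWithConfiguration:config];

          // Generate a "live" report of all threads without crashing the app.
          NSData *data = [reporter generateLiveReport];
          NSError *error = nil;
          PLCrashReport *report = [[PLCrashReport alloc] initWithData:data error:&error];
          if (report) {
              // Format the report as human-readable text, then log or upload it.
              NSString *text = [PLCrashReportTextFormatter stringValueForCrashReport:report
                                                                      withTextFormat:PLCrashReportTextFormatiOS];
              NSLog(@"Lag stack:\n%@", text);
          }
      }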

    4. Didi's DoraemonKit

      Its approach is basically to keep "pinging" the main thread from a child thread: when the main thread is stuck, the ping gets no response within the expected time, and a lag is detected.
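
      A simplified sketch of this ping idea, using only GCD, is shown below (the method name and the 400ms threshold are made up for illustration; this is not DoraemonKit's actual source):

      // Belongs to some monitor class; start it once at app launch.
      - (void)startPingMonitor {
          dispatch_async(dispatch_get_global_queue(0, 0), ^{
              while (YES) {
                  dispatch_semaphore_t sema = dispatch_semaphore_create(0);

                  // "Ping": ask the main thread to run a trivial block.
                  dispatch_async(dispatch_get_main_queue(), ^{
                      dispatch_semaphore_signal(sema); // "pong"
                  });

                  // Wait up to 400ms for the pong; the threshold is arbitrary here.
                  long timedOut = dispatch_semaphore_wait(sema,
                      dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.4 * NSEC_PER_SEC)));
                  if (timedOut != 0) {
                      NSLog(@"Main thread did not respond in time, possible lag");
                  }

                  // Ping roughly once per second.
                  [NSThread sleepForTimeInterval:1.0];
              }
          });
      }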

  • Optimization scheme

    From the analysis of lag above, we know that lag mainly comes from the CPU and GPU stages taking too long and causing dropped frames, so the main work of interface optimization is to reduce the load on the CPU and GPU.

    • Pre-layout

      Pre-layout mainly reduces the load on the CPU.

      Suppose there is a TableView whose cell height depends on each cell's content. We know that TableView has a reuse mechanism: if there are cells in the reuse pool, a cell sliding onto the screen will reuse one of them to save resources, but its height still has to be recalculated from the new data, and the cell's content has to be laid out again. So as the same TableView is scrolled back and forth, the frame of the same cell is calculated over and over, which burdens the CPU. If instead the cell's frame is computed once when the data model is created, and the TableView simply returns the cached frame, then even if the TableView is scrolled back and forth, the frame calculation for a given cell is performed only once, which reduces the load. As the figure below shows, composing a cell needs a model to provide the data and a layout to describe how the cell's content is laid out; a minimal sketch of such a layout cache follows.
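
      A minimal sketch of the idea (the class, property and function names here are invented for illustration, not taken from the article's project):

        #import <UIKit/UIKit.h>

        // Illustrative model + precomputed layout: the frame is calculated once, when the data arrives.
        @interface FeedCellLayout : NSObject
        @property (nonatomic, assign) CGRect titleFrame;
        @property (nonatomic, assign) CGFloat cellHeight;
        @end
        @implementation FeedCellLayout @end

        @interface FeedModel : NSObject
        @property (nonatomic, copy)   NSString *title;
        @property (nonatomic, strong) FeedCellLayout *layout;   // cached layout, computed once
        @end
        @implementation FeedModel @end

        // Compute the layout once, e.g. right after parsing the network response (ideally on a child thread).
        static FeedCellLayout *LGLayoutForModel(FeedModel *model, CGFloat cellWidth) {
            FeedCellLayout *layout = [FeedCellLayout new];
            CGSize titleSize = [model.title boundingRectWithSize:CGSizeMake(cellWidth - 30, CGFLOAT_MAX)
                                                         options:NSStringDrawingUsesLineFragmentOrigin
                                                      attributes:@{NSFontAttributeName : [UIFont systemFontOfSize:15]}
                                                         context:nil].size;
            layout.titleFrame = CGRectMake(15, 10, titleSize.width, titleSize.height);
            layout.cellHeight = CGRectGetMaxY(layout.titleFrame) + 10;
            return layout;
        }

        // The data source then simply returns the cached value instead of recalculating it:
        // - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
        //     return self.models[indexPath.row].layout.cellHeight;
        // }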

    • Pre-decode & pre-render

      In the picture rendering flow, after the CPU stage obtains the image's vertex data and texture, the image is decoded to produce a bitmap, which is then handed to the GPU for rendering; the main flow chart is as follows.

      If the picture is very large, the decoding work occupies the main thread's RunLoop, which then cannot do anything else (such as respond to scrolling), and lag appears. The decoding can therefore be moved to an asynchronous thread so that it does not occupy the main thread. You might think it is enough to load the image on an asynchronous thread, generate a UIImage or CGImage there, and then set it on the UIImageView on the main thread. But if you write such code and inspect the stack with Instruments' Time Profiler, you will find that the image decoding still happens on the main thread, because decoding is deferred until the image is first displayed. A common solution is to draw the image into a CGBitmapContext on a child thread and then create the image directly from that bitmap, which is what the image codec handling in third-party frameworks such as SDWebImage does. This forces the image to be decoded off the main thread; the code is as follows:

          // `queue` is any background queue; `self` here is the URL string.
          dispatch_async(queue, ^{
              // Download the data and get the original (still undecoded) CGImage.
              CGImageRef cgImage = [UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:self]]].CGImage;
              CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cgImage) & kCGBitmapAlphaInfoMask;

              BOOL hasAlpha = NO;
              if (alphaInfo == kCGImageAlphaPremultipliedLast ||
                  alphaInfo == kCGImageAlphaPremultipliedFirst ||
                  alphaInfo == kCGImageAlphaLast ||
                  alphaInfo == kCGImageAlphaFirst) {
                  hasAlpha = YES;
              }

              // Choose a bitmap format that matches the image's alpha information.
              CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
              bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;

              size_t width = CGImageGetWidth(cgImage);
              size_t height = CGImageGetHeight(cgImage);

              // Draw into a bitmap context on this child thread; this is where decoding actually happens.
              CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
              CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, bitmapInfo);
              CGColorSpaceRelease(colorSpace);
              CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
              cgImage = CGBitmapContextCreateImage(context);

              // `cornerRadius:` is presumably a UIImage category method that rounds the image; it is not part of UIKit.
              UIImage *image = [[UIImage imageWithCGImage:cgImage] cornerRadius:width * 0.5];
              CGContextRelease(context);
              CGImageRelease(cgImage);
              completion(image);
          });
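
      A hypothetical call site (assuming the snippet above is wrapped in an NSString category method, here called lg_decodeRoundImageWithCompletion: purely for illustration, since the snippet uses self as the URL string) would hop back to the main thread before touching UIKit:

          // Hypothetical usage of the snippet above; the method name is made up.
          [urlString lg_decodeRoundImageWithCompletion:^(UIImage *image) {
              dispatch_async(dispatch_get_main_queue(), ^{
                  imageView.image = image; // the image is already decoded, so no extra work on the main thread
              });
          }];
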
    • Load on demand

      As the name implies, load only what needs to be displayed and skip what does not. For example, do not load a TableView's images while it is being scrolled; load them only when scrolling stops, by scheduling the image drawing/setting in the Runloop's default mode, as sketched below.
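
      One common way to do this (a sketch, not tied to any particular framework; imageView and image are placeholders) is to schedule the setImage: call only in NSDefaultRunLoopMode, so it is skipped while the runloop is in UITrackingRunLoopMode during scrolling:

        // Only runs when the main runloop is in NSDefaultRunLoopMode,
        // i.e. not while the user is actively scrolling (UITrackingRunLoopMode).
        [imageView performSelector:@selector(setImage:)
                        withObject:image
                        afterDelay:0
                           inModes:@[NSDefaultRunLoopMode]];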

    • Asynchronous rendering

      Before we talk about asynchronous rendering, let's look at the relationship between UIView and CALayer:

      1. UIView is based on the UIKit framework; it can receive click events, handle user touches, and manage subviews
      2. CALayer is based on CoreAnimation, which in turn is based on QuartzCore, so CALayer is only responsible for display and cannot handle user touch events
      3. UIView inherits directly from UIResponder, while CALayer inherits from NSObject
      4. UIView is responsible for receiving and responding to events, while CALayer is responsible for displaying the UI; UIView depends on CALayer for display

      Conclusion: UIView is mainly responsible for event handling, CALayer for display. The principle of asynchronous rendering is to draw all the views into a bitmap on a child thread, then go back to the main thread and assign the bitmap to the layer's contents. For example, the asynchronous rendering process of the Graver framework is as follows:

      The core source code is as follows:

      if (drawingFinished && targetDrawingCount == layer.drawingCount) {
          CGImageRef CGImage = context ? CGBitmapContextCreateImage(context) : NULL;
          UIImage *image = CGImage ? [UIImage imageWithCGImage:CGImage] : nil;

          void (^finishBlock)(void) = ^{
              // If the drawing count has changed, this drawing pass has been cancelled.
              if (targetDrawingCount != layer.drawingCount) {
                  failedBlock();
                  return;
              }
              // Hand the bitmap produced on the child thread to the layer.
              layer.contents = (id)image.CGImage;
              // ...
          };

          if (drawInBackground) dispatch_async(dispatch_get_main_queue(), finishBlock);
          else finishBlock();

          // ... release the CGImageRef and end the bitmap context here ...
      }

      The final renderings are as follows:

      You can also use frameworks such as YYAsyncLayer for asynchronous rendering.
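
      For reference, a minimal sketch of hooking a custom view up to YYAsyncLayer might look like this (based on YYAsyncLayer's public interface; the view class and the drawing code are illustrative):

        #import "YYAsyncLayer.h"

        @interface LGAsyncView : UIView <YYAsyncLayerDelegate>
        @property (nonatomic, copy) NSString *text;
        @end

        @implementation LGAsyncView

        // Back this view with a YYAsyncLayer instead of a plain CALayer.
        + (Class)layerClass {
            return [YYAsyncLayer class];
        }

        // YYAsyncLayer asks its delegate for a display task; the display block
        // runs on a background queue and draws into the provided context.
        - (YYAsyncLayerDisplayTask *)newAsyncDisplayTask {
            NSString *text = self.text;
            YYAsyncLayerDisplayTask *task = [YYAsyncLayerDisplayTask new];
            task.display = ^(CGContextRef context, CGSize size, BOOL (^isCancelled)(void)) {
                if (isCancelled()) return; // the drawing was invalidated, bail out early
                [text drawInRect:CGRectMake(0, 0, size.width, size.height)
                  withAttributes:@{NSFontAttributeName : [UIFont systemFontOfSize:15]}];
            };
            return task;
        }

        @end

      When the content changes, calling setNeedsDisplay on the view's layer triggers a new asynchronous drawing pass.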

    • other
      1. Reduce the level of layers
      2. Reduce off-screen rendering
      3. Set the size of the image for display (not too large)
      4. Avoid using addSubview to dynamically add views to a cell
      5. Try to avoid transparent views, because a transparent view forces the GPU to blend its pixels with the pixels of the layers beneath it (when an opaque layer is covered by a semi-transparent one, the GPU cannot simply draw the top layer; it has to compute a new color from both layers and show the blended result on screen), as sketched below
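
      A quick sketch of keeping views opaque so the GPU can skip blending (you can verify the effect with the "Color Blended Layers" option in the Simulator's Debug menu or in Instruments' Core Animation template):

        // An opaque view with a solid background color does not trigger blending on the GPU.
        UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 40)];
        label.opaque = YES;                            // promise there are no transparent pixels
        label.backgroundColor = [UIColor whiteColor];  // a clear/nil background would force blending
        label.alpha = 1.0;                             // alpha < 1 also forces blending with the layers below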