YYKit series source-code analysis articles:

  • YYText source code analysis: CoreText and asynchronous drawing
  • YYModel source code analysis: focus on performance
  • YYCache source code analysis: highlights
  • YYImage source code analysis: image processing skills
  • YYAsyncLayer source code analysis: asynchronous drawing
  • YYWebImage source code analysis: thread processing and caching strategy

Introduction

First of all — do you really know how to use images?

Images are everywhere in modern app interface design, and well-crafted images bring visual pleasure and improve the user experience. They also bring technical challenges: handling GIFs, keeping image display smooth, keeping image assets small, and dealing with very large images.

This article walks through image-processing techniques alongside the YYImage source code. I will not translate the source word for word, but extract the parts worth thinking about — so it is best to read with the source open, using this article as a guide.

Source code based on version 1.0.4.

I. Image processing techniques

First, some considerations and techniques of image processing. The conclusions below are drawn from other blog posts, official documentation, and actual testing. Corrections are welcome 😁.

The journey of an image from disk to screen is roughly: load the image data from disk, decode the binary data into a bitmap, process it through the Core Animation framework, and finally draw it on screen.

In practice the drawing itself is rarely the bottleneck; the most time-consuming step is decoding, and if the file is large, reading it from disk also takes considerable time.

1. Loading and decompression

imageNamed: or imageWithData: loads the image into memory and produces a UIImage instance. The image is not decompressed at this point — that happens only when the run loop is about to commit the display transaction (CATransaction). This decompression runs on the main thread and is a major cause of stuttering.

The imageNamed: method

When imageNamed: loads image data (creating a UIImage instance), that data is also cached. So the first load of an image via this method takes noticeable time, but loading the same image again is very fast. (Note that the image has not yet been drawn to screen at this point — that is, it has not yet been decompressed.)

As for decompression: after the first successful decompression before drawing to screen, the system caches the decoded data in memory.

It is worth noting that these caches are global and are not cleared when the current UIImage instance is released. They may be cleared only after a memory warning or when the app first enters the background; the timing and scope of this clearing are decided by the system and cannot be influenced by us.

The imageWithData: method

When loading an image with imageWithData:, neither the loaded data nor the decompressed data is cached globally as with imageNamed:. When the UIImage instance is released, the associated image data and decompressed data are destroyed with it.

Differences between the two loading methods

From the analysis above: imageNamed: leaves a global memory footprint but performs well the second time the same image is used; imageWithData: has no global footprint but "starts from scratch" every time the same image is loaded and decompressed.

So imageNamed: suits small, frequently used images, while imageWithData: suits large, rarely used images.
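As a quick sketch of the two loading styles (the file names here are hypothetical):

```objc
// Cached globally by the system; the second load of the same name is fast.
UIImage *icon = [UIImage imageNamed:@"tab_icon"];

// No global cache; all data is released together with the UIImage instance.
NSString *path = [[NSBundle mainBundle] pathForResource:@"bigBanner" ofType:@"jpg"];
UIImage *banner = [UIImage imageWithData:[NSData dataWithContentsOfFile:path]];
```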

2. Optimization of loading and decompression

The optimization mentioned here is not the optimization of the decompression algorithm, but the optimization based on user experience.

Load optimization

For the loading step: if the file is large, or loads are frequent enough to affect the frame rate (a list of large images, say), you can load asynchronously to relieve the main thread. The code is roughly:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"testImage" ofType:@"jpeg"]];
    dispatch_async(dispatch_get_main_queue(), ^{
        // business logic
    });
});
Decompression optimization

Decompression is time-consuming and happens on the main thread by default. A common industry practice is therefore to decompress the binary image data into bitmap data ahead of time on a background thread, which CGBitmapContextCreate() and a handful of related functions make possible.

This processing mode is reflected in many picture processing frameworks.
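As a minimal sketch of the idea — not YYImage's actual implementation, which also handles alpha, byte order, and color space far more carefully — forced decompression on a background queue looks roughly like this:

```objc
// Sketch: force-decode a UIImage into a bitmap off the main thread.
- (void)decodeImage:(UIImage *)image completion:(void (^)(UIImage *decoded))completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        CGImageRef imageRef = image.CGImage;
        size_t width  = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
            kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        UIImage *decoded = image; // fall back to the original on failure
        if (context) {
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); // decoding happens here
            CGImageRef decodedRef = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            decoded = [UIImage imageWithCGImage:decodedRef scale:image.scale orientation:image.imageOrientation];
            CGImageRelease(decodedRef);
        }
        dispatch_async(dispatch_get_main_queue(), ^{ completion(decoded); });
    });
}
```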

3. Processing of large images

It is worth noting that a business requirement may involve loading a very large image. Loading it the conventional way would consume too much memory; moreover, if the image exceeds the texture size the GPU supports (4096 × 4096 on the most common iOS devices at the time of writing), the CPU and GPU spend extra resources processing it for display.

So very large images need special handling.

For example, to display the full image scaled down to targetSize, you can do the following:

UIGraphicsBeginImageContext(targetSize);
[originalImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *targetImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

To display parts of a large graph, you can do this:

CGImageRef tmpImage = CGImageCreateWithImageInRect(originalImage.CGImage, rect);
UIImage *targetImage = [UIImage imageWithCGImage:tmpImage];
CGImageRelease(tmpImage);

Or simply use CALayer’s contentsRect property to achieve the same effect.
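contentsRect uses the unit coordinate space, where (0, 0, 1, 1) means the whole image. A sketch showing only the top-left quarter, assuming an existing imageView:

```objc
// Show only the top-left quarter of the image; no cropping copy is made.
imageView.layer.contentsRect = CGRectMake(0, 0, 0.5, 0.5);
```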

II. Overview of the YYImage framework

The preceding section covered principles and core ideas of image processing as background knowledge. Now let's look at the design of the YYImage framework from a macro perspective. The directory structure is as follows:

YYImage.h (.m)  
YYFrameImage.h (.m)
YYSpriteSheetImage.h (.m)
YYAnimatedImageView.h (.m)
YYImageCoder.h (.m)

YYImage, YYFrameImage, and YYSpriteSheetImage are image classes that inherit from UIImage. YYAnimatedImageView inherits from UIImageView and is the view that drives the framework's custom image classes. YYImageCoder is the encoder/decoder.

Here are the features listed in the README on GitHub:

  • Supports playback/encoding/decoding of the following types of animated images: WebP, APNG, GIF.
  • Supports display/encoding/decoding of the following static image types: WebP, PNG, GIF, JPEG, JP2, TIFF, BMP, ICO, ICNS.
  • Supports progressive/progressive scan/interlaced scan decoding of the following types of images: PNG, GIF, JPEG, BMP.
  • Supports frame animation composed of multiple images, and sprite-sheet animation based on a single image.
  • Efficient dynamic memory cache management to ensure high performance and low memory animation playback.
  • Fully compatible with UIImage and UIImageView, easy to use.
  • Retain an extensible interface to support custom animations.
  • Each class and method is well documented.

III. The YYImage class

This class extends UIImage with support for decoding WebP, APNG, and GIF images. To avoid the global cache, the imageNamed: method is overridden.

+ (YYImage *)imageNamed:(NSString *)name {
    ...
    NSArray *exts = ext.length > 0 ? @[ext] : @[@"", @"png", @"jpeg", @"jpg", @"gif", @"webp", @"apng"];
    NSArray *scales = _NSBundlePreferredScales();
    for (int s = 0; s < scales.count; s++) {
        scale = ((NSNumber *)scales[s]).floatValue;
        NSString *scaledName = _NSStringByAppendingNameScale(res, scale);
        for (NSString *e in exts) {
            path = [[NSBundle mainBundle] pathForResource:scaledName ofType:e];
            if (path) break;
        }
        if (path) break;
    }
    ...
    return [[self alloc] initWithData:data scale:scale];
}
  • As you can see, if no extension is specified for the image, every supported type is tried in turn.
  • scales is an array of the form @[@1, @2, @3]; its order (the lookup priority) varies with the screen's physical-to-logical resolution ratio.
  • The first valid path found is used to call initWithData:scale: for initialization.

So YYImage is easier to use than plain UIImage — the extension can be omitted even for types other than PNG — but for maximum performance it is still better to specify the extension.
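In day-to-day use the class is a drop-in replacement for UIImage; a sketch (asset name hypothetical):

```objc
YYImage *image = [YYImage imageNamed:@"rocket.gif"]; // extension optional, but faster when given
YYAnimatedImageView *imageView = [[YYAnimatedImageView alloc] initWithImage:image];
[self.view addSubview:imageView]; // animation starts automatically once visible
```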

Most initializers funnel into initWithData:scale:, which sets up the semaphore (used as a lock) and the image decoder (YYImageDecoder), extracts the first frame through the decoder, and finally calls initWithCGImage:scale:orientation: to create the instance.

The class also exposes @property (nonatomic) BOOL preloadAllAnimatedImageFrames; — its setter preloads every frame of the animation:

- (void)setPreloadAllAnimatedImageFrames:(BOOL)preloadAllAnimatedImageFrames {
    if (_preloadAllAnimatedImageFrames != preloadAllAnimatedImageFrames) {
        if (preloadAllAnimatedImageFrames && _decoder.frameCount > 0) {
            NSMutableArray *frames = [NSMutableArray new];
            // Fetch every frame (decoded)
            for (NSUInteger i = 0, max = _decoder.frameCount; i < max; i++) {
                UIImage *img = [self animatedImageFrameAtIndex:i];
                [frames addObject:img ?: [NSNull null]];
            }
            dispatch_semaphore_wait(_preloadedLock, DISPATCH_TIME_FOREVER);
            _preloadedFrames = frames;
            dispatch_semaphore_signal(_preloadedLock);
        } else {
            dispatch_semaphore_wait(_preloadedLock, DISPATCH_TIME_FOREVER);
            _preloadedFrames = nil;
            dispatch_semaphore_signal(_preloadedLock);
        }
    }
}

The work is mainly in the for loop, which fetches the decompressed image of every frame (animatedImageFrameAtIndex: is explained later). Because this decompresses, the method genuinely consumes CPU, so in practice it can be called on a background thread.

Note the use of dispatch_semaphore_t as a thread lock here — a very good fit, since it only needs to guarantee that _preloadedFrames is read and written safely.
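The pattern — create the semaphore with a count of 1, wait before the critical section, signal after — can be sketched as:

```objc
dispatch_semaphore_t lock = dispatch_semaphore_create(1); // count of 1 => a binary lock

dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER);     // "lock"
// ... read or write the shared data (here, _preloadedFrames) ...
dispatch_semaphore_signal(lock);                          // "unlock"
```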

IV. The YYFrameImage class

This class implements frame animation: the image and the display duration of each frame are configurable. The frame images may be PNG or JPEG:

- (nullable instancetype)initWithImagePaths:(NSArray<NSString *> *)paths
                             frameDurations:(NSArray<NSNumber *> *)frameDurations
                                  loopCount:(NSUInteger)loopCount;
- (nullable instancetype)initWithImageDataArray:(NSArray<NSData *> *)dataArray
                                 frameDurations:(NSArray *)frameDurations
                                      loopCount:(NSUInteger)loopCount;

These two initializers are the main ones — very simple. Once each frame is configured, YYAnimatedImageView acts as the carrier that runs and displays the animation.
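A hedged usage sketch (file names hypothetical):

```objc
NSArray<NSString *> *paths = @[
    [[NSBundle mainBundle] pathForResource:@"frame_0" ofType:@"png"],
    [[NSBundle mainBundle] pathForResource:@"frame_1" ofType:@"png"],
    [[NSBundle mainBundle] pathForResource:@"frame_2" ofType:@"png"],
];
// Per-frame durations in seconds; loopCount 0 means loop forever.
YYFrameImage *image = [[YYFrameImage alloc] initWithImagePaths:paths
                                                frameDurations:@[@0.1, @0.2, @0.1]
                                                     loopCount:0];
YYAnimatedImageView *imageView = [[YYAnimatedImageView alloc] initWithImage:image];
```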

V. The YYSpriteSheetImage class

Sprite-sheet animation: the principle is a single large image containing many complete small images, with different regions of it displayed at different moments.

The point of this approach is to merge the loading and decompression of many images into one large image, which reduces per-image overhead and improves overall loading and decompression performance.

This class is in fact very simple; the interface in YYSpriteSheetImage.h is as follows:

- (nullable instancetype)initWithSpriteSheetImage:(UIImage *)image
                                     contentRects:(NSArray<NSValue *> *)contentRects
                                   frameDurations:(NSArray<NSNumber *> *)frameDurations
                                        loopCount:(NSUInteger)loopCount;

@property (nonatomic, readonly) NSArray<NSValue *> *contentRects;
@property (nonatomic, readonly) NSArray<NSNumber *> *frameDurations;
@property (nonatomic, readonly) NSUInteger loopCount;

The initializer takes two parallel arrays: one of NSValue-wrapped CGRects (the region of each frame) and one of the corresponding durations.

CALayer's contentsRect property is then used to dynamically display the relevant region of the large image. The logic for this, of course, also lives in the YYAnimatedImageView class.
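A sketch of slicing a hypothetical 4 × 2 sprite sheet into eight frames:

```objc
UIImage *sheet = [UIImage imageNamed:@"explosion_sheet"]; // hypothetical asset
NSMutableArray<NSValue *> *rects = [NSMutableArray new];
NSMutableArray<NSNumber *> *durations = [NSMutableArray new];
CGFloat w = sheet.size.width / 4, h = sheet.size.height / 2;
for (int row = 0; row < 2; row++) {
    for (int col = 0; col < 4; col++) {
        [rects addObject:[NSValue valueWithCGRect:CGRectMake(col * w, row * h, w, h)]];
        [durations addObject:@(1 / 30.0)]; // ~30 fps
    }
}
YYSpriteSheetImage *image = [[YYSpriteSheetImage alloc] initWithSpriteSheetImage:sheet
                                                                    contentRects:rects
                                                                  frameDurations:durations
                                                                       loopCount:0];
```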

VI. The YYAnimatedImage protocol

The YYAnimatedImage protocol is a bridge between YYAnimatedImageView and YYImage, YYFrameImage, and YYSpriteSheetImage.

@protocol YYAnimatedImage <NSObject>
@required
/// Total number of animation frames
- (NSUInteger)animatedImageFrameCount;
/// Number of animation loops
- (NSUInteger)animatedImageLoopCount;
/// Memory size of each frame, in bytes
- (NSUInteger)animatedImageBytesPerFrame;
/// Frame image at the given index
- (nullable UIImage *)animatedImageFrameAtIndex:(NSUInteger)index;
/// Display duration of the frame at the given index
- (NSTimeInterval)animatedImageDurationAtIndex:(NSUInteger)index;
@optional
/// Content rect (region of the frame at the given index)
- (CGRect)animatedImageContentsRectAtIndex:(NSUInteger)index;
@end

Whether the source is a GIF, an array of frame images, or a sprite sheet, the code that animates and displays the frames does not care where they came from — this is an extraction of common logic. YYImage, YYFrameImage, and YYSpriteSheetImage all implement this protocol; the details can be read in the source.

Of these, - (CGRect)animatedImageContentsRectAtIndex:(NSUInteger)index; is the optional method: only YYSpriteSheetImage needs it, to supply the data its sprite-sheet animation requires — a special case outside the common logic.

Using protocols to formalize common logic is a skill worth learning: it makes the logic clearer and the code better organized.

VII. The YYAnimatedImageView class

The YYAnimatedImageView class obtains frame images and related information through the YYAnimatedImage protocol methods implemented by YYImage, YYFrameImage, and YYSpriteSheetImage, and uses them to drive the animation display.

That is the principle in outline; the technical details analyzed below are where the real value lies.

1. Initialization process

The @property (nonatomic, copy) NSString *runloopMode; property defaults to NSRunLoopCommonModes, which keeps the animation running while a scroll view is being dragged.

This class overrides a series of methods to make them all go through a custom configuration:

- (void)setImage:(UIImage *)image {
    if (self.image == image) return;
    [self setImage:image withType:YYAnimatedImageTypeImage];
}
- (void)setHighlightedImage:(UIImage *)highlightedImage {
    if (self.highlightedImage == highlightedImage) return;
    [self setImage:highlightedImage withType:YYAnimatedImageTypeHighlightedImage];
}
...

The setImage:withType: method assigns the image to super.image (and its siblings) and ends by calling imageChanged, which is where the main initialization happens:

- (void)imageChanged {
    YYAnimatedImageType newType = [self currentImageType];
    id newVisibleImage = [self imageForType:newType];
    NSUInteger newImageFrameCount = 0;
    BOOL hasContentsRect = NO;
    ...
    /* 1. If the previous image was a sprite sheet and the new one is not,
       reset self.layer.contentsRect */
    if (!hasContentsRect && _curImageHasContentsRect) {
        if (!CGRectEqualToRect(self.layer.contentsRect, CGRectMake(0, 0, 1, 1))) {
            [CATransaction begin];
            [CATransaction setDisableActions:YES];
            self.layer.contentsRect = CGRectMake(0, 0, 1, 1);
            [CATransaction commit];
        }
    }
    _curImageHasContentsRect = hasContentsRect;
    /* 2. For a sprite sheet, configure self.layer.contentsRect
       via the setContentsRect:forImage: method */
    if (hasContentsRect) {
        CGRect rect = [((UIImage<YYAnimatedImage> *)newVisibleImage) animatedImageContentsRectAtIndex:0];
        [self setContentsRect:rect forImage:newVisibleImage];
    }
    /* 3. For a multi-frame image, set up the animation state via resetAnimated,
       then fetch the first frame and call setNeedsDisplay to draw it */
    if (newImageFrameCount > 1) {
        [self resetAnimated];
        _curAnimatedImage = newVisibleImage;
        _curFrame = newVisibleImage;
        _totalLoop = _curAnimatedImage.animatedImageLoopCount;
        _totalFrameCount = _curAnimatedImage.animatedImageFrameCount;
        [self calcMaxBufferCount];
    }
    [self setNeedsDisplay];
    [self didMoved];
}

It is worth noting that step 1 resets self.layer.contentsRect to CGRectMake(0, 0, 1, 1) inside a CATransaction with actions disabled, which cancels the implicit animation. (The default 0.25-second implicit animation is simply not wanted here.)

2. Timing of animation start and end

- (void)didMoved {
    if (self.autoPlayAnimatedImage) {
        if(self.superview && self.window) {
            [self startAnimating];
        } else {
            [self stopAnimating];
        }
    }
}
- (void)didMoveToWindow {
    [super didMoveToWindow];
    [self didMoved];
}
- (void)didMoveToSuperview {
    [super didMoveToSuperview];
    [self didMoved];
}

The animation is started or stopped in the didMoveToWindow and didMoveToSuperview lifecycle methods, so playback starts and stops automatically without explicit calls from outside the component. Note that didMoved checks self.superview && self.window: a YYAnimatedImageView does not start animating merely because it has a superview — it must actually be on a window.

3. Decompress asynchronously

YYAnimatedImageView owns a queue variable, NSOperationQueue *_requestQueue;

_requestQueue = [[NSOperationQueue alloc] init];
_requestQueue.maxConcurrentOperationCount = 1;

You can see that the _requestQueue is a serial queue that handles decompression tasks.

_YYAnimatedImageViewFetchOperation inherits from NSOperation and overrides the main method with a custom fetch task; instances of it are submitted to _requestQueue:

- (void)main {
    ...
    for (int i = 0; i < max; i++, idx++) {
        @autoreleasepool {
            ...
            if (miss) {
                UIImage *img = [_curImage animatedImageFrameAtIndex:idx];
                img = img.yy_imageByDecoded;
                if ([self isCancelled]) break;
                LOCK_VIEW(view->_buffer[@(idx)] = img ? img : [NSNull null]);
                view = nil;
            }
        }
    }
}

The key lines: calling animatedImageFrameAtIndex: triggers decoding, and the subsequent yy_imageByDecoded access is a second safeguard that the frame really has been decoded; view->_buffer[@(idx)] = img then caches the decoded frame.

The author repeatedly checks if ([self isCancelled]) break; (or return;) because an NSOperation may be cancelled while it is executing.

@autoreleasepool is used inside the for loop so that temporary objects are drained every iteration instead of piling up until the task ends.

Thus, the decompression work is essentially guaranteed to run on the _requestQueue serial queue without affecting the main thread.

4. Caching mechanism

YYAnimatedImageView has the following variables:

NSMutableDictionary *_buffer; ///< frame buffer
BOOL _bufferMiss; ///< whether miss frame on last opportunity
NSUInteger _maxBufferCount; ///< maximum buffer count
NSInteger _incrBufferCount; ///< current allowed buffer count (will increase by step)

_buffer is the frame buffer pool; the main method of the private _YYAnimatedImageViewFetchOperation class writes decoded frames into _buffer, and the author also caps the cache size.

Cache limit calculation

- (void)calcMaxBufferCount {
    int64_t bytes = (int64_t)_curAnimatedImage.animatedImageBytesPerFrame;
    if (bytes == 0) bytes = 1024;
    int64_t total = _YYDeviceMemoryTotal();
    int64_t free = _YYDeviceMemoryFree();
    int64_t max = MIN(total * 0.2, free * 0.6);
    max = MAX(max, BUFFER_SIZE);
    if (_maxBufferSize) max = max > _maxBufferSize ? _maxBufferSize : max;
    double maxBufferCount = (double)max / (double)bytes;
    if (maxBufferCount < 1) maxBufferCount = 1;
    else if (maxBufferCount > 512) maxBufferCount = 512;
    _maxBufferCount = maxBufferCount;
}

This method is not complicated: take 20% of total memory (_YYDeviceMemoryTotal()) and 60% of free memory (_YYDeviceMemoryFree()), use the smaller of the two, then clamp it against the minimum buffer value BUFFER_SIZE and the user-configurable _maxBufferSize. For example, on a device with 2 GB of memory and 1 GB free, max = MIN(2 GB × 0.2, 1 GB × 0.6) = 400 MB.

Cache cleanup timing

Two listeners are registered in the resetAnimated method:

[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didReceiveMemoryWarning:) name:UIApplicationDidReceiveMemoryWarningNotification object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didEnterBackground:) name:UIApplicationDidEnterBackgroundNotification object:nil];

When a memory alert is received or the APP enters the background, the author trims the cache:

- (void)didEnterBackground:(NSNotification *)notification {
    [_requestQueue cancelAllOperations];
    NSNumber *next = @((_curIndex + 1) % _totalFrameCount);
    LOCK(
         NSArray *keys = _buffer.allKeys;
         for (NSNumber *key in keys) {
             if (![key isEqualToNumber:next]) { // keep the next frame for smooth animation
                 [_buffer removeObjectForKey:key];
             }
         }
     ) //LOCK
}

On entering the background, it cancels all pending decompression tasks, computes the index of the next frame, and removes every cached frame except that one, so the next frame can be shown promptly when the app returns to the foreground.

The handling of memory warnings is similar and will not be described in detail.

5. Timer

This class uses CADisplayLink for its timing task: the system fires the callback on every screen refresh, i.e. roughly 60 times per second by default. This property makes CADisplayLink ideal for frame-rate-related UI logic.

Preventing circular references

_link = [CADisplayLink displayLinkWithTarget:[_YYImageWeakProxy proxyWithTarget:self] selector:@selector(step:)];

Here we use a _YYImageWeakProxy private class for message forwarding to prevent circular reference, look at _YYImageWeakProxy core code:

@interface _YYImageWeakProxy : NSProxy
@property (nonatomic, weak, readonly) id target;
...
@end
...
- (id)forwardingTargetForSelector:(SEL)selector {
    return _target;
}
- (void)forwardInvocation:(NSInvocation *)invocation {
    void *null = NULL;
    [invocation setReturnValue:&null];
}
- (NSMethodSignature *)methodSignatureForSelector:(SEL)selector {
    return [NSObject instanceMethodSignatureForSelector:@selector(init)];
}
...

While the target is alive, messages sent to the _YYImageWeakProxy instance are forwarded to the target as normal.

forwardingTargetForSelector: returns nil once the target has been released, so the redirection fails; the runtime then calls methodSignatureForSelector: to obtain a valid method signature — and throws an exception if none is returned — which is why an arbitrary but valid signature (that of init) is returned here.

Once methodSignatureForSelector: has produced a valid signature, forwardInvocation: is called to perform the forwarding. The author writes [invocation setReturnValue:&null]; to give an explicit empty return value so the outermost caller never receives an uncontrolled one. Although nil would be returned by default when the method is not actually invoked, it is safer to set the value explicitly than to rely on system behavior.
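Why the proxy is needed can be sketched in a few lines — CADisplayLink retains its target, so passing self directly would create a cycle the view could never escape:

```objc
// Without the proxy: view -> _link -> view (CADisplayLink retains its target) => leak.
// _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(step:)];

// With the proxy: view -> _link -> proxy -(weak)-> view; the view can deallocate
// normally and invalidate the link in its dealloc.
_link = [CADisplayLink displayLinkWithTarget:[_YYImageWeakProxy proxyWithTarget:self]
                                    selector:@selector(step:)];
```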

Timing task

The timer callback - (void)step:(CADisplayLink *)link { ... } is the core driver of the animation. The code is relatively easy to follow: it mainly displays the current frame and kicks off the decompression task for the next one.

VIII. The YYImageCoder codec

This file mainly contains the YYImageFrame frame-information class, the YYImageDecoder decoder, and the YYImageEncoder encoder.

Note that this article will not discuss the WebP/APNG decompression algorithms themselves; it mainly illustrates usage built on ImageIO.

1. Core decoding code

CGImageRef YYCGImageCreateDecodedCopy(CGImageRef imageRef, BOOL decodeForDisplay) {
    ...
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef) & kCGBitmapAlphaInfoMask;
    BOOL hasAlpha = NO;
    if (alphaInfo == kCGImageAlphaPremultipliedLast ||
        alphaInfo == kCGImageAlphaPremultipliedFirst ||
        alphaInfo == kCGImageAlphaLast ||
        alphaInfo == kCGImageAlphaFirst) {
        hasAlpha = YES;
    }
    // BGRA8888 (premultiplied) or BGRX8888
    // same as UIGraphicsBeginImageContext() and -[UIView drawRect:]
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
    bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, YYCGColorSpaceGetDeviceRGB(), bitmapInfo);
    if (!context) return NULL;
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); // decode
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    CFRelease(context);
    return newImage;
    ...
}

The core of the decoding is easy to spot — it essentially converts CGImageRef data into bitmap data:

  • Use CGBitmapContextCreate() to create a bitmap context.
  • Use CGContextDrawImage() to draw the image into the context.
  • Use CGBitmapContextCreateImage() to generate the decoded image from the context.

2. Progressive decoding

In the private method _updateSourceImageIO you can see the progressive decoding logic. The main steps are:

  • Use CGImageSourceCreateIncremental(NULL) to create an empty image source.
  • Use CGImageSourceUpdateData() to feed newly received data into the image source.
  • Use CGImageSourceCreateImageAtIndex() to create an image from the data received so far.

Progressive decoding allows an image to be decoded and displayed while it is still downloading, producing the gradual-reveal effect familiar from web pages — a much better experience.
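The three ImageIO calls fit together roughly like this (a sketch; the method and ivar names are hypothetical, and accumulatedData must hold all bytes received so far, not just the newest chunk):

```objc
// _source is created once: _source = CGImageSourceCreateIncremental(NULL);
- (void)didReceiveData:(NSData *)accumulatedData finished:(BOOL)finished {
    CGImageSourceUpdateData(_source, (__bridge CFDataRef)accumulatedData, finished);
    CGImageRef partial = CGImageSourceCreateImageAtIndex(_source, 0, NULL);
    if (partial) {
        self.imageView.image = [UIImage imageWithCGImage:partial]; // show what has arrived so far
        CGImageRelease(partial);
    }
}
```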

3. Locks used by YYImageDecoder

Admittedly, I grew tired of wading through the dense C code of ImageIO and CoreGraphics. A general understanding of these things is good to have; mastering it all at once is genuinely hard, and worth pursuing only if the business demands it 😂.

Interestingly, YYImageDecoder uses two locks.

One is the semaphore dispatch_semaphore_t _framesLock; — as its name suggests, it protects reads and writes of the frame array NSMutableArray *_frames; (Array<YYImageDecoderFrame>).

The other is pthread_mutex_t _lock;, which is initialized as a recursive lock:

pthread_mutexattr_t attr;
pthread_mutexattr_init (&attr);
pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init (&_lock, &attr);
pthread_mutexattr_destroy (&attr);

Sure enough, the pthread_mutex_t mutex also supports recursive locking, making it a good substitute for the worse-performing NSRecursiveLock.

So why use recursive locking here?

A plain mutex deadlocks when the same thread tries to acquire it again before releasing it, whereas a recursive lock allows the same thread to acquire it multiple times — to acquire it "recursively". That is, a recursive lock is reentrant for the same thread, while still behaving as a mutex across threads.
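The difference can be sketched directly:

```objc
pthread_mutex_t lock;
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(&lock, &attr);
pthread_mutexattr_destroy(&attr);

pthread_mutex_lock(&lock);
pthread_mutex_lock(&lock);   // same thread, second acquisition: fine (a plain mutex would deadlock)
pthread_mutex_unlock(&lock);
pthread_mutex_unlock(&lock); // unlock once for every lock
pthread_mutex_destroy(&lock);
```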

Reading the source, however, there seems to be no place where the lock is actually re-entered; presumably this is a long-term, defensive choice by the author to reduce the chance of deadlock.

Afterword

For a relatively large open-source library like this, do not fall into the trap of reading it word for word — a mature project has been through many rounds of maintenance. What matters is understanding the author's ideas and the core techniques. This article is merely a brick thrown out to attract jade; gentle criticism welcome.

Now — can readers say they know how to use images?

References: "Some small tips on iOS image processing"; "Research on mobile image formats".