Preface

YYImage is a powerful image-processing framework in YYKit, written by ibireme. It provides image encoding/decoding, GIF playback, and related features.

Network images are loaded by YYWebImage, whose logic is similar to SDWebImage's: YYCache provides the caching layer, and YYImage serves as the image codec. Importing YYWebImage with CocoaPods automatically pulls in its dependencies, YYCache and YYImage.

YYImage Overview

The YYImage framework mainly consists of the following files:

YYImage: inherits from UIImage and adopts the YYAnimatedImage protocol. It is the entry point for basic image loading; it overrides UIImage's class method imageNamed: and can load PNG, JPG, GIF, and other formats.

YYFrameImage: inherits from UIImage and adopts the YYAnimatedImage protocol. It is responsible for loading frame animations: a group of still images flipped through quickly (like PPT slides) to produce motion. A frame animation is defined by the image frames plus a duration for each frame, and the animation is assembled from those parameters.

YYSpriteSheetImage: inherits from UIImage and adopts the YYAnimatedImage protocol. It is responsible for loading sprite-sheet animations, which differ from the other kinds: the whole animation lives in a single image (a GIF, by contrast, decompresses into a group of images). The sheet packs a lot of content, e.g. several actions of a character stored at different positions of the image, and a companion JSON file records the frame rect of each piece. Games commonly use this scheme to avoid managing references to many separate images.

YYAnimatedImageView: inherits from UIImageView, UIKit's view for displaying images, and is responsible for playing animated images. It is used together with UIImage, YYImage, YYFrameImage, and YYSpriteSheetImage. Because the animation is driven internally by CADisplayLink, it can be scheduled in the run loop's default mode so that the animation pauses while a scroll view is sliding, avoiding that performance cost; the default mode, however, is NSRunLoopCommonModes. To play GIFs and other animated images successfully you must use this class rather than a plain UIImageView.

YYImageCoder: contains YYImageEncoder, YYImageDecoder, YYImageFrame, and related classes; it is responsible for image encoding and decoding and for storing image-related metadata.

YYImage features

Supports playback, encoding, and decoding of the following animated image types: WebP, APNG, GIF

Supports display, encoding, and decoding of the following still image types: WebP, PNG, GIF, JPEG, JP2, TIFF, BMP, ICO, ICNS

Supports progressive/interlaced/baseline decoding of the following image types: PNG, GIF, JPEG, BMP

Supports frame animations built from multiple images, and sprite-sheet animation from a single image

Efficient dynamic memory cache management for high-performance, low-memory animation playback

Fully compatible with UIImage and UIImageView, easy to use

Provides extensible interfaces to support custom animations

Each class and method is well documented

YYImage Source Code Exploration

YYImage mainly consists of the YYImage, YYFrameImage, YYSpriteSheetImage, YYAnimatedImageView, and YYImageCoder files. The following sections walk through them one by one.

Among them, ImageIO is Apple's core framework for processing image information; it is used in the YYImageDecoder file.

Image Basics

Before studying the source code, review some background on how images are represented.

Data buffer: a block of memory holding the raw bytes of an image file; the PNG and JPEG data commonly loaded in applications sits in a data buffer before it is decoded.

Pixel: the basic element that makes up an image; each pixel holds a single color.

Image buffer: a buffer whose elements describe the color of each pixel in the image (four RGBA components: red, green, blue, and alpha), so the size of the image buffer is proportional to the size of the image.

Frame buffer: holds the actual content the app renders. When a view changes, UIKit re-renders the content and writes it into the frame buffer, which stores the color information of every pixel on screen.

The image rendering pipeline can be roughly divided into the following steps:

load the compressed image data, decompress (decode) it, process the image, render the content

YYFrameImage, YYSpriteSheetImage

YYFrameImage is the image class used to load frame animations, i.e. animations composed from a sequence of pictures. It is used as follows:

// File: frame1.png, frame2.png, frame3.png
NSArray *paths = @[@"/ani/frame1.png", @"/ani/frame2.png", @"/ani/frame3.png"];
NSArray *times = @[@0.1, @0.2, @0.1];
UIImage *image = [[YYFrameImage alloc] initWithImagePaths:paths 
    frameDurations:times repeats:YES];
UIImageView *imageView = [[YYAnimatedImageView alloc] initWithImage:image];
[self.view addSubview:imageView];

By default the initializer loads and decompresses only the first image; the following frames are decompressed later, when YYAnimatedImageView plays the animation. The interface looks like this:

- (instancetype)initWithImagePaths:(NSArray *)paths 
    frameDurations:(NSArray *)frameDurations 
    loopCount:(NSUInteger)loopCount {
    if (paths.count == 0) return nil;
    if (paths.count != frameDurations.count) return nil;
    
    NSString *firstPath = paths[0];
    NSData *firstData = [NSData dataWithContentsOfFile:firstPath];
    CGFloat scale = _NSStringPathScale(firstPath);
    // After loading, call yy_imageByDecoded to get a decoded copy of the image,
    // which eventually calls YYCGImageCreateDecodedCopy to create the image
    UIImage *firstCG = [[[UIImage alloc] initWithData:firstData] 
        yy_imageByDecoded];
    self = [self initWithCGImage:firstCG.CGImage scale:scale orientation:UIImageOrientationUp];
    if (!self) return nil;
    long frameByte = CGImageGetBytesPerRow(firstCG.CGImage) * CGImageGetHeight(firstCG.CGImage);
    _oneFrameBytes = (NSUInteger)frameByte;
    _imagePaths = paths.copy;
    _frameDurations = frameDurations.copy;
    _loopCount = loopCount;
    
    return self;
}

Using YYSpriteSheetImage follows the earlier introduction: you need the frame rect information for each region of the sheet. In this example the sheet is divided into equally sized cells and each frame displays one cell; this is just one way to slice a sheet.

// 8 * 12 sprites in a single sheet image
UIImage *img = [UIImage imageNamed:@"sprite-sheet"];
NSMutableArray *contentRects = [NSMutableArray new];
NSMutableArray *durations = [NSMutableArray new];
for (int j = 0; j < 12; j++) {
   for (int i = 0; i < 8; i++) {
       CGRect rect;
       rect.size = CGSizeMake(img.size.width / 8, img.size.height / 12);
       rect.origin.x = img.size.width / 8 * i;
       rect.origin.y = img.size.height / 12 * j;
       [contentRects addObject:[NSValue valueWithCGRect:rect]];
       [durations addObject:@(1 / 60.0)];
   }
}
YYSpriteSheetImage *sprite;
sprite = [[YYSpriteSheetImage alloc] initWithSpriteSheetImage:img
                                                contentRects:contentRects
                                              frameDurations:durations
                                                   loopCount:0];
YYAnimatedImageView *imageView = [YYAnimatedImageView new];
imageView.size = CGSizeMake(img.size.width / 8, img.size.height / 12);
imageView.image = sprite;
[self.view addSubview:imageView];

YYSpriteSheetImage is initialized in a similar way: the image itself is initialized and the related frame information recorded.

- (instancetype)initWithSpriteSheetImage:(UIImage *)image
                            contentRects:(NSArray *)contentRects
                          frameDurations:(NSArray *)frameDurations
                               loopCount:(NSUInteger)loopCount {
    if (!image.CGImage) return nil;
    if (contentRects.count < 1 || frameDurations.count < 1) return nil;
    if (contentRects.count != frameDurations.count) return nil;
    
    self = [super initWithCGImage:image.CGImage 
        scale:image.scale orientation:image.imageOrientation];
    if (!self) return nil;
    
    _contentRects = contentRects.copy;
    _frameDurations = frameDurations.copy;
    _loopCount = loopCount;
    return self;
}

YYImage, YYImageCoder

The two classes above do relatively little image processing themselves. Starting from YYImage, let's look at what the image-decoding code actually does.

First, YYImage inherits from UIImage and overrides its class method imageNamed: to prepare the image information, i.e. to load the NSData.

+ (YYImage *)imageNamed:(NSString *)name {
    if (name.length == 0) return nil;
    if ([name hasSuffix:@"/"]) return nil;
    
    NSString *res = name.stringByDeletingPathExtension;
    NSString *ext = name.pathExtension;
    NSString *path = nil;
    CGFloat scale = 1;
    
    // If there is no extension, guess based on system support, same as UIImage
    NSArray *exts = ext.length > 0 ? @[ext] : @[@"", @"png", @"jpeg", @"jpg", @"gif", @"webp", @"apng"];
    NSArray *scales = _NSBundlePreferredScales();
    for (int s = 0; s < scales.count; s++) {
        scale = ((NSNumber *)scales[s]).floatValue;
        NSString *scaledName = _NSStringByAppendingNameScale(res, scale);
        for (NSString *e in exts) {
            path = [[NSBundle mainBundle] pathForResource:scaledName ofType:e];
            if (path) break;
        }
        if (path) break;
    }
    if (path.length == 0) return nil;
    
    NSData *data = [NSData dataWithContentsOfFile:path];
    if (data.length == 0) return nil;
    // Call the initWithData:scale method to start preparing the operation related to decompressing the image
    return [[self alloc] initWithData:data scale:scale];
}

Eventually execution reaches the initWithData:scale: method, where the image is decoded.

- (instancetype)initWithData:(NSData *)data scale:(CGFloat)scale {
    if (data.length == 0) return nil;
    // Get the scale of the screen
    if (scale <= 0) scale = [UIScreen mainScreen].scale;
    _preloadedLock = dispatch_semaphore_create(1);
    // Decode inside an autorelease pool so temporary objects are released promptly
    @autoreleasepool {
        // Use the YYImageDecoder class to decompress the image information
        // Get some information about the image, image width and height, frame number, image type
        YYImageDecoder *decoder = [YYImageDecoder decoderWithData:data scale:scale];
        // Through the decompression of the YYImageDecoder object, to obtain the data information of a frame (each frame information saved UIImage information)
        YYImageFrame *frame = [decoder frameAtIndex:0 decodeForDisplay:YES];
        UIImage *image = frame.image;
        if (!image) return nil;
        // Generate the image and set the rotation type
        self = [self initWithCGImage:image.CGImage 
            scale:decoder.scale orientation:image.imageOrientation];
        if (!self) return nil;
        _animatedImageType = decoder.type;
        if (decoder.frameCount > 1) {
            _decoder = decoder;
            _bytesPerFrame = CGImageGetBytesPerRow(image.CGImage) * 
                CGImageGetHeight(image.CGImage);
            _animatedImageMemorySize = _bytesPerFrame * decoder.frameCount;
        }
        self.yy_isDecodedForDisplay = YES;
    }
    return self;
}

After YYImageDecoder is initialized via decoderWithData:scale:, the following method is eventually called.

You can see that YYImageDetectType is called first to detect the image format, and then _updateSource is called to update the data source.

- (BOOL)updateData:(NSData *)data final:(BOOL)final {
    BOOL result = NO;
    // Mutex to avoid interference from multi-threaded operations
    pthread_mutex_lock(&_lock);
    result = [self _updateData:data final:final];
    pthread_mutex_unlock(&_lock);
    return result;
}
// Internal actual processing method
- (BOOL)_updateData:(NSData *)data final:(BOOL)final {
    if (_finalized) return NO;
    if (data.length < _data.length) return NO;
    _finalized = final;
    _data = data;
    // Check the image format
    YYImageType type = YYImageDetectType((__bridge CFDataRef)data);
    if (_sourceTypeDetected) {
        if (_type != type) {
            return NO;
        } else {
            // Decode the updated data
            [self _updateSource];
        }
    } else {
        if (_data.length > 16) {
            _type = type;
            _sourceTypeDetected = YES;
            // Decode the updated data
            [self _updateSource];
        }
    }
    return YES;
}

YYImageDetectType checks the image format: the leading bytes of an image file contain its format's magic number. The following method identifies the format.

// c1..c4 are single-byte values holding the image's type signature
// The macro packs them into one 4-byte (32-bit) value via shifts and bitwise OR, so formats can't collide
#define YY_FOUR_CC(c1,c2,c3,c4) ((uint32_t)(((c4) << 24) | ((c3) << 16) | ((c2) << 8) | (c1)))
YYImageType YYImageDetectType(CFDataRef data) {
    if (!data) return YYImageTypeUnknown;
    // At least 16 bytes are required before the type can be detected
    uint64_t length = CFDataGetLength(data);
    if (length < 16) return YYImageTypeUnknown;
    
    const char *bytes = (char *)CFDataGetBytePtr(data);
    // First four bytes
    uint32_t magic4 = *((uint32_t *)bytes);
    switch (magic4) {
        case YY_FOUR_CC(0x4D, 0x4D, 0x00, 0x2A): { // big endian TIFF
            return YYImageTypeTIFF;
        } break;
            
        case YY_FOUR_CC(0x49, 0x49, 0x2A, 0x00): { // little endian TIFF
            return YYImageTypeTIFF;
        } break;
            
        case YY_FOUR_CC(0x00, 0x00, 0x01, 0x00): { // ICO
            return YYImageTypeICO;
        } break;
            
        case YY_FOUR_CC(0x00, 0x00, 0x02, 0x00): { // CUR
            return YYImageTypeICO;
        } break;
            
        case YY_FOUR_CC('i', 'c', 'n', 's'): { // ICNS
            return YYImageTypeICNS;
        } break;
            
        case YY_FOUR_CC('G', 'I', 'F', '8'): { // GIF
            return YYImageTypeGIF;
        } break;
        // 89 50 4E 47 (. P N G)
        case YY_FOUR_CC(0x89, 'P', 'N', 'G'): { // PNG
            uint32_t tmp = *((uint32_t *)(bytes + 4));
            if (tmp == YY_FOUR_CC('\r', '\n', 0x1A, '\n')) {
                return YYImageTypePNG;
            }
        } break;
            
        case YY_FOUR_CC('R', 'I', 'F', 'F'): { // WebP
            uint32_t tmp = *((uint32_t *)(bytes + 8));
            if (tmp == YY_FOUR_CC('W', 'E', 'B', 'P')) {
                return YYImageTypeWebP;
            }
        } break;
        /* case YY_FOUR_CC('B', 'P', 'G', 0xFB): { // BPG
            return YYImageTypeBPG;
        } break; */
    }
    
    uint16_t magic2 = *((uint16_t *)bytes);
    switch (magic2) {
        case YY_TWO_CC('B', 'A'):
        case YY_TWO_CC('B', 'M'):
        case YY_TWO_CC('I', 'C'):
        case YY_TWO_CC('P', 'I'):
        case YY_TWO_CC('C', 'I'):
        case YY_TWO_CC('C', 'P'): { // BMP
            return YYImageTypeBMP;
        }
        case YY_TWO_CC(0xFF, 0x4F): { // JPEG2000
            return YYImageTypeJPEG2000;
        }
    }
    
    // JPG FF D8 FF
    if (memcmp(bytes, "\377\330\377", 3) == 0) return YYImageTypeJPEG;
    
    // JP2
    if (memcmp(bytes + 4, "\152\120\040\040\015", 5) == 0) return YYImageTypeJPEG2000;
    
    return YYImageTypeUnknown;
}

_updateSource dispatches to the decoder for the detected format, as shown below.

Animated images are usually one of three formats: WebP, APNG, or GIF.

- (void)_updateSource {
    switch (_type) {
        case YYImageTypeWebP: {
            [self _updateSourceWebP];
        } break;
            
        case YYImageTypePNG: {
            [self _updateSourceAPNG];
        } break;
            
        default: {
            [self _updateSourceImageIO];
        } break;
    }
}

We'll look at the _updateSourceImageIO method, which handles (among others) GIF, the most commonly used animated format.

- (void)_updateSourceImageIO {
    _width = 0;
    _height = 0;
    _orientation = UIImageOrientationUp; // The default image orientation is up
    _loopCount = 0; // Loop count of the animated image
    dispatch_semaphore_wait(_framesLock, DISPATCH_TIME_FOREVER);
    _frames = nil;
    dispatch_semaphore_signal(_framesLock);
    // ImageIO creates the image source from the data
    if (!_source) {
        if (_finalized) {
            _source = CGImageSourceCreateWithData((__bridge CFDataRef)_data, NULL);
        } else {
            _source = CGImageSourceCreateIncremental(NULL);
            if (_source) CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, false);
        }
    } else {
        CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, _finalized);
    }
    if (!_source) return;
    // Get the number of frames from the source
    _frameCount = CGImageSourceGetCount(_source);
    if (_frameCount == 0) return;
    
    if (!_finalized) { // ignore multi-frame before finalized
        _frameCount = 1;
    } else {
        if (_type == YYImageTypePNG) { // use custom apng decoder and ignore multi-frame
            _frameCount = 1;
        }
        // For GIF, read the related information through the ImageIO framework
        if (_type == YYImageTypeGIF) { // get gif loop count
            // Get the properties of the image
            CFDictionaryRef properties = CGImageSourceCopyProperties(_source, NULL);
            if (properties) {
                CFDictionaryRef gif = CFDictionaryGetValue(properties,
                    kCGImagePropertyGIFDictionary);
                if (gif) {
                    // The number of loops to play
                    CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount);
                    if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &_loopCount);
                }
                CFRelease(properties);
            }
        }
    }
    
    /* ICO, GIF, APNG may contain multi-frame. */
    NSMutableArray *frames = [NSMutableArray new];
    for (NSUInteger i = 0; i < _frameCount; i++) {
        // One frame of the animated image (a still image has a single frame, frameCount is 1)
        // _YYImageDecoderFrame is a subclass of YYImageFrame that stores each frame's basic info (width, height, ...)
        _YYImageDecoderFrame *frame = [_YYImageDecoderFrame new];
        frame.index = i;
        frame.blendFromIndex = i; // The frame index to blend from
        frame.hasAlpha = YES;
        frame.isFullSize = YES;
        [frames addObject:frame];
        
        // Read the frame's properties and store them on the _YYImageDecoderFrame object
        CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(_source, i, NULL);
        if (properties) {
            NSTimeInterval duration = 0;
            NSInteger orientationValue = 0, width = 0, height = 0;
            CFTypeRef value = NULL;
            // Read the width and height
            value = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);
            if (value) CFNumberGetValue(value, kCFNumberNSIntegerType, &width);
            value = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);
            if (value) CFNumberGetValue(value, kCFNumberNSIntegerType, &height);
            if (_type == YYImageTypeGIF) {
                CFDictionaryRef gif = CFDictionaryGetValue(properties, 
                    kCGImagePropertyGIFDictionary);
                if (gif) {
                    // Use the unclamped frame delay if it exists.
                    // Read the frame's display duration (key = kCGImagePropertyGIFUnclampedDelayTime)
                    value = CFDictionaryGetValue(gif, kCGImagePropertyGIFUnclampedDelayTime);
                    if (!value) {
                        // If the unclamped delay is missing, fall back to
                        // kCGImagePropertyGIFDelayTime, which has the same meaning
                        value = CFDictionaryGetValue(gif, kCGImagePropertyGIFDelayTime);
                    }
                    if (value) CFNumberGetValue(value, kCFNumberDoubleType, &duration);
                }
            }
            
            frame.width = width;
            frame.height = height;
            frame.duration = duration;
            
            // On the first frame, record the image's _width/_height and read its orientation
            if (i == 0 && _width + _height == 0) { // init first frame
                _width = width;
                _height = height;
                value = CFDictionaryGetValue(properties, kCGImagePropertyOrientation);
                if (value) {
                    CFNumberGetValue(value, kCFNumberNSIntegerType, &orientationValue);
                    _orientation = YYUIImageOrientationFromEXIFValue(orientationValue);
                }
            }
            CFRelease(properties);
        }
    }
    dispatch_semaphore_wait(_framesLock, DISPATCH_TIME_FOREVER);
    _frames = frames;
    dispatch_semaphore_signal(_framesLock);
}

Having covered decoderWithData:scale:, which builds the decoder's basic information from the compressed image, we now turn to frameAtIndex:decodeForDisplay:, which extracts a single frame. It ends up calling the private method _frameAtIndex:decodeForDisplay:, which decodes the frame into a _YYImageDecoderFrame object.

- (YYImageFrame *)_frameAtIndex:(NSUInteger)index decodeForDisplay:(BOOL)decodeForDisplay {
    if (index >= _frames.count) return 0;
    // Get the frame info object for the given index
    _YYImageDecoderFrame *frame = [(_YYImageDecoderFrame *)_frames[index] copy];
    BOOL decoded = NO;
    BOOL extendToCanvas = NO;
    if (_type != YYImageTypeICO && decodeForDisplay) { // whether to extend the frame to the full canvas
        extendToCanvas = YES;
    }
    
    // Whether blending is required
    if (!_needBlend) {
        // No blending needed: decode the frame directly; this step returns the decompressed image
        CGImageRef imageRef = [self _newUnblendedImageAtIndex:index
            extendToCanvas:extendToCanvas decoded:&decoded];
        if (!imageRef) return nil;
        if (decodeForDisplay && !decoded) {
            CGImageRef imageRefDecoded = YYCGImageCreateDecodedCopy(imageRef, YES);
            if (imageRefDecoded) {
                CFRelease(imageRef);
                imageRef = imageRefDecoded;
                decoded = YES;
            }
        }
        UIImage *image = [UIImage imageWithCGImage:imageRef scale:_scale 
            orientation:_orientation];
        CFRelease(imageRef);
        if (!image) return nil;
        image.yy_isDecodedForDisplay = decoded;
        frame.image = image;
        return frame;
    }
    
    // Handle frames that need blending
    if (![self _createBlendContextIfNeeded]) return nil;
    CGImageRef imageRef = NULL;
    
    // Blend the given frame with the previously blended frame
    if (_blendFrameIndex + 1 == frame.index) {
        imageRef = [self _newBlendedImageWithFrame:frame];
        _blendFrameIndex = index;
    } else { // should draw canvas from previous frame
        _blendFrameIndex = NSNotFound;
        CGContextClearRect(_blendCanvas, CGRectMake(0, 0, _width, _height));
        
        if (frame.blendFromIndex == frame.index) {
            CGImageRef unblendedImage = [self _newUnblendedImageAtIndex:index 
                extendToCanvas:NO decoded:NULL];
            if (unblendedImage) {
                CGContextDrawImage(_blendCanvas, CGRectMake(frame.offsetX, 
                    frame.offsetY, frame.width, frame.height), unblendedImage);
                CFRelease(unblendedImage);
            }
            imageRef = CGBitmapContextCreateImage(_blendCanvas);
            if (frame.dispose == YYImageDisposeBackground) {
                CGContextClearRect(_blendCanvas, CGRectMake(frame.offsetX, 
                    frame.offsetY, frame.width, frame.height));
            }
            _blendFrameIndex = index;
        } else { // canvas is not ready
            for (uint32_t i = (uint32_t)frame.blendFromIndex; i <= (uint32_t)frame.index; i++) {
                if (i == frame.index) {
                    if (!imageRef) imageRef = [self _newBlendedImageWithFrame:frame];
                } else {
                    [self _blendImageWithFrame:_frames[i]];
                }
            }
            _blendFrameIndex = index;
        }
    }
    
    if (!imageRef) return nil;
    UIImage *image = [UIImage imageWithCGImage:imageRef scale:_scale orientation:_orientation];
    CFRelease(imageRef);
    if (!image) return nil;
    
    image.yy_isDecodedForDisplay = YES;
    frame.image = image;
    if (extendToCanvas) {
        frame.width = _width;
        frame.height = _height;
        frame.offsetX = 0;
        frame.offsetY = 0;
        frame.dispose = YYImageDisposeNone;
        frame.blend = YYImageBlendNone;
    }
    return frame;
}

The decoding method _newUnblendedImageAtIndex:extendToCanvas:decoded: is implemented as follows.

- (CGImageRef)_newUnblendedImageAtIndex:(NSUInteger)index
                         extendToCanvas:(BOOL)extendToCanvas
                                decoded:(BOOL *)decoded CF_RETURNS_RETAINED {
    
    if (!_finalized && index > 0) return NULL;
    if (_frames.count <= index) return NULL;
    _YYImageDecoderFrame *frame = _frames[index];
    
    if (_source) {
        // Create the CGImageRef for this index from the image source
        CGImageRef imageRef = CGImageSourceCreateImageAtIndex(_source, index, 
            (CFDictionaryRef)@{(id)kCGImageSourceShouldCache:@(YES)});
        if (imageRef && extendToCanvas) {
            // Read the width and height of the frame at this index
            size_t width = CGImageGetWidth(imageRef);
            size_t height = CGImageGetHeight(imageRef);
            if (width == _width && height == _height) {
                // The frame matches the canvas size: just make a decoded copy
                CGImageRef imageRefExtended = YYCGImageCreateDecodedCopy(imageRef, YES);
                if (imageRefExtended) {
                    CFRelease(imageRef);
                    imageRef = imageRefExtended;
                    if (decoded) *decoded = YES;
                }
            } else {
                // The frame is smaller than the canvas: draw it into a bitmap context
                // RGBA is 4 components * 8 bits, so decoding costs width * height * 4 bytes
                // data: NULL lets the system allocate the memory
                // width/height: size in pixels; bitsPerComponent: 8
                // bitmapInfo: alpha position + byte order
                //   kCGImageAlphaPremultipliedFirst: ARGB component order (premultiplied alpha)
                //   kCGBitmapByteOrder32Host: host byte order; iOS devices are little-endian,
                //   i.e. the low-order byte is stored at the low address
                CGContextRef context = CGBitmapContextCreate(NULL, _width, _height, 8, 0, YYCGColorSpaceGetDeviceRGB(), 
                    kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
                if (context) {
                    CGContextDrawImage(context, CGRectMake(0, _height - height, width, height), imageRef);
                    CGImageRef imageRefExtended = CGBitmapContextCreateImage(context);
                    CFRelease(context);
                    if (imageRefExtended) {
                        CFRelease(imageRef);
                        imageRef = imageRefExtended;
                        if (decoded) *decoded = YES;
                    }
                }
            }
        }
        return imageRef;
    }
    // Decoding for other image types (e.g. WebP) is not covered here.
    return NULL;
}

YYAnimatedImageView

The image information loaded by YYImage ultimately has to be handed to a YYAnimatedImageView to be rendered, so YYAnimatedImageView handles the rendering logic.

Its init methods are all similar, so here is one of them.

The default runloop mode used for rendering is NSRunLoopCommonModes.

- (instancetype)initWithImage:(UIImage *)image {
    self = [super init];
    // The default runloop mode; animations keep running while scroll views scroll
    _runloopMode = NSRunLoopCommonModes;
    
    _autoPlayAnimatedImage = YES;
    self.frame = (CGRect) {CGPointZero, image.size };
    self.image = image;
    return self;
}

The real work happens when the image is set through the setter, which eventually calls setImage:withType:.

- (void)setImage:(UIImage *)image {
    if (self.image == image) return;
    [self setImage:image withType:YYAnimatedImageTypeImage];
}
- (void)setImage:(id)image withType:(YYAnimatedImageType)type {
    // Stop the animation
    [self stopAnimating];
    // Reset the animation state
    // _link is a CADisplayLink that fires once per screen frame, well suited to driving image animation
    if (_link) [self resetAnimated];
    _curFrame = nil;
    switch (type) {
        case YYAnimatedImageTypeNone: break;
        case YYAnimatedImageTypeImage: super.image = image; break;
        case YYAnimatedImageTypeHighlightedImage: super.highlightedImage = image; break;
        case YYAnimatedImageTypeImages: super.animationImages = image; break;
        case YYAnimatedImageTypeHighlightedImages: super.highlightedAnimationImages = image; break;
    }
    // Callback after image change
    [self imageChanged];
}

imageChanged gathers the information the animation needs, including the rendered region, and initializes the timer.

- (void)imageChanged {
    YYAnimatedImageType newType = [self currentImageType];
    id newVisibleImage = [self imageForType:newType];
    NSUInteger newImageFrameCount = 0;
    BOOL hasContentsRect = NO;
    // Check whether the image adopts the YYAnimatedImage protocol and read its frame count
    if ([newVisibleImage isKindOfClass:[UIImage class]] &&
        [newVisibleImage conformsToProtocol:@protocol(YYAnimatedImage)]) {
        newImageFrameCount = ((UIImage<YYAnimatedImage> *) newVisibleImage).
            animatedImageFrameCount;
        if (newImageFrameCount > 1) {
            hasContentsRect = [((UIImage<YYAnimatedImage> *) newVisibleImage)
                respondsToSelector:@selector(animatedImageContentsRectAtIndex:)];
        }
    }
    // Reset the rendered region; only sprite-sheet images need a sub-rect
    if (!hasContentsRect && _curImageHasContentsRect) {
        if (!CGRectEqualToRect(self.layer.contentsRect, CGRectMake(0, 0, 1, 1))) {
            // Transaction: update the layer without implicit animation
            [CATransaction begin];
            [CATransaction setDisableActions:YES];
            self.layer.contentsRect = CGRectMake(0, 0, 1, 1);
            [CATransaction commit];
        }
    }
    _curImageHasContentsRect = hasContentsRect;
    // Sprite-sheet images animate by switching the rendered region
    if (hasContentsRect) {
        CGRect rect = [((UIImage<YYAnimatedImage> *) newVisibleImage) 
            animatedImageContentsRectAtIndex:0];
        [self setContentsRect:rect forImage:newVisibleImage];
    }
    
    // If there is more than one frame, initialize the animation parameters
    if (newImageFrameCount > 1) {
        [self resetAnimated];
        _curAnimatedImage = newVisibleImage;
        _curFrame = newVisibleImage;
        _totalLoop = _curAnimatedImage.animatedImageLoopCount;
        _totalFrameCount = _curAnimatedImage.animatedImageFrameCount;
        [self calcMaxBufferCount];
    }
    // Mark for display and set up the runloop timer (CADisplayLink)
    [self setNeedsDisplay];
    [self didMoved];
}

resetAnimated initializes the state needed for animation playback.

- (void)resetAnimated {
    if(! _link) { _lock = dispatch_semaphore_create(1);
        _buffer = [NSMutableDictionary new];
        // An asynchronous serial queue, which ensures that tasks are executed one by one, is used to load the next frame of image information
        _requestQueue = [[NSOperationQueue alloc] init];
        _requestQueue.maxConcurrentOperationCount = 1;
        // Play the timer of the current image
        _link = [CADisplayLink displayLinkWithTarget:[_YYImageWeakProxy proxyWithTarget:self] selector:@selector(step:)];
        // Add to the specified runloopMode
        if (_runloopMode) {
            [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:_runloopMode];
        }
        // Pause the timer by default
        _link.paused = YES;
        
        // Memory warning and operation logic to cut into the background
        [[NSNotificationCenter defaultCenter] addObserver:self 
            selector:@selector(didReceiveMemoryWarning:) 
            name:UIApplicationDidReceiveMemoryWarningNotification object:nil];
        [[NSNotificationCenter defaultCenter] addObserver:self 
        selector:@selector(didEnterBackground:) 
        name:UIApplicationDidEnterBackgroundNotification object:nil];
    }
    // Cancel queuing
    [_requestQueue cancelAllOperations];
    LOCK(
        // Hand the old buffer off so the images are released on a background thread
         if (_buffer.count) {
             NSMutableDictionary *holder = _buffer;
             _buffer = [NSMutableDictionary new];
             dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
                 // Capture the dictionary on a low-priority global queue and
                 // release these images in background to avoid blocking UI thread.
                 [holder class];
             });
         }
    );
    _link.paused = YES;
    _time = 0;
    if (_curIndex != 0) {
        [self willChangeValueForKey:@"currentAnimatedImageIndex"];
        _curIndex = 0;
        [self didChangeValueForKey:@"currentAnimatedImageIndex"];
    }
    _curAnimatedImage = nil;
    _curFrame = nil;
    _curLoop = 0;
    _totalLoop = 0;
    _totalFrameCount = 1;
    _loopEnd = NO;
    _bufferMiss = NO;
    _incrBufferCount = 0;
}
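The buffer handoff in resetAnimated — swap in a fresh dictionary and let a low-priority global queue release the old one — keeps the expensive deallocation off the main thread. The same ownership-transfer idea can be sketched with plain C and pthreads; all names below (`FrameBuffer`, `reset_buffer`, `release_buffer`) are illustrative, not YYImage API:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct { int *frames; size_t count; } FrameBuffer;

/* Background worker: frees the old buffer off the caller's thread. */
static void *release_buffer(void *arg) {
    FrameBuffer *holder = (FrameBuffer *)arg;
    free(holder->frames);
    free(holder);
    return NULL;
}

/* Swap in a fresh, empty buffer and hand the old one to a worker thread,
   mirroring how resetAnimated hands `_buffer` to a global queue. */
FrameBuffer *reset_buffer(FrameBuffer *old, pthread_t *worker) {
    pthread_create(worker, NULL, release_buffer, old);
    return calloc(1, sizeof(FrameBuffer));
}
```

Note that in the Objective-C original, `[holder class]` exists only to force the block to capture (and retain) the dictionary, so that the images are released when the block itself is destroyed on the background queue.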

Once the basic parameters are initialized, the animation is driven by CADisplayLink. Its callback frequency is tied to the screen refresh rate, usually 60 frames per second.

// Refresh frequency is related
- (void)step:(CADisplayLink *)link {
    // Get the image object that conforms to the YYAnimatedImage protocol
    UIImage <YYAnimatedImage> *image = _curAnimatedImage;
    NSMutableDictionary *buffer = _buffer;
    UIImage *bufferedImage = nil;
    // The index of the next frame is set to modulus in order to ensure that the animation can play in a loop
    NSUInteger nextIndex = (_curIndex + 1) % _totalFrameCount;
    BOOL bufferIsFull = NO;
    
    if (!image) return;
    // If you have finished playing, end the animation
    if (_loopEnd) { // view will keep in last frame
        [self stopAnimating];
        return;
    }
    //YYImageFrame:image
    NSTimeInterval delay = 0;
    // The previous frame was fetched successfully (no buffer miss)
    if (!_bufferMiss) {
        _time += link.duration;
        // Get the display duration of the current frame
        delay = [image animatedImageDurationAtIndex:_curIndex];
        // If the frame has not yet been shown for its full duration, keep it on screen
        if (_time < delay) return;
        _time -= delay; // Time is up; prepare to switch to the next frame
        // The start of the next round, in which to determine whether the animation should be finished
        if (nextIndex == 0) {
            _curLoop++;
            // The number of loops is increased. If the number of loops has reached, the animation is terminated
            if (_curLoop >= _totalLoop && _totalLoop != 0) {
                _loopEnd = YES;
                [self stopAnimating];
                [self.layer setNeedsDisplay]; // Update the screen
                return; // stop at last frame
            }
        }
        // Get the duration of the next frame and keep accumulating; this repeats every tick
        delay = [image animatedImageDurationAtIndex:nextIndex];
        if (_time > delay) _time = delay; // do not jump over frame
    }
    LOCK(
        // Retrieve the next frame from the buffer cache
         bufferedImage = buffer[@(nextIndex)];
         if (bufferedImage) {
             // Find the next frame from the cache
             if ((int)_incrBufferCount < _totalFrameCount) {
                 [buffer removeObjectForKey:@(nextIndex)];
             }
             [self willChangeValueForKey:@"currentAnimatedImageIndex"];
             _curIndex = nextIndex;
             [self didChangeValueForKey:@"currentAnimatedImageIndex"];
             _curFrame = bufferedImage == (id)[NSNull null] ? nil : bufferedImage;
             // The contents rect needed for Sprite sheet animation
             if (_curImageHasContentsRect) {
                 _curContentsRect = [image animatedImageContentsRectAtIndex:_curIndex];
                 [self setContentsRect:_curContentsRect forImage:_curFrame];
             }
             nextIndex = (_curIndex + 1) % _totalFrameCount;
             _bufferMiss = NO;
             if (buffer.count == _totalFrameCount) {
                 bufferIsFull = YES;
             }
         } else {
             // The next frame was not found in the cache
             _bufferMiss = YES;
         }
    ); //LOCK
    
    // A new frame is ready; ask the layer to redraw
    if (!_bufferMiss) {
        [self.layer setNeedsDisplay]; // let system call `displayLayer:` before runloop sleep
    }
    // If there is no image in the cache, start the thread immediately to get the image information, and then proceed to the next round
    if (!bufferIsFull && _requestQueue.operationCount == 0) { // if some work not finished, wait for next opportunity
        // An asynchronous operation on the serial queue loads the next frame's data
        _YYAnimatedImageViewFetchOperation *operation = [_YYAnimatedImageViewFetchOperation new];
        operation.view = self;
        operation.nextIndex = nextIndex;
        operation.curImage = image;
        [_requestQueue addOperation:operation];
    }
}
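The accumulate-and-advance timing in step: can be exercised outside UIKit. Below is a plain-C simulation of the same scheme — accumulate the display-link tick duration, advance the index modulo the frame count once the current frame's delay has elapsed, count loops, and stop when the loop limit is reached. The names (`Player`, `step`) are illustrative; this is a sketch of the logic, not YYImage code:

```c
#include <stddef.h>

/* Mirrors the ivars used by YYAnimatedImageView's step: method. */
typedef struct {
    size_t cur_index;   /* like _curIndex                  */
    size_t frame_count; /* like _totalFrameCount           */
    double time;        /* accumulated time, like _time    */
    size_t cur_loop;    /* like _curLoop                   */
    size_t total_loop;  /* like _totalLoop; 0 = forever    */
    int    loop_end;    /* like _loopEnd                   */
} Player;

/* One display-link tick; delays[] holds each frame's duration in seconds.
   Returns the index of the frame that should now be on screen. */
size_t step(Player *p, const double *delays, double tick) {
    if (p->loop_end) return p->cur_index;
    p->time += tick;
    double delay = delays[p->cur_index];
    if (p->time < delay) return p->cur_index;  /* frame not shown long enough */
    p->time -= delay;
    size_t next = (p->cur_index + 1) % p->frame_count;
    if (next == 0) {                           /* a full pass just completed  */
        p->cur_loop++;
        if (p->total_loop != 0 && p->cur_loop >= p->total_loop) {
            p->loop_end = 1;                   /* stop at the last frame      */
            return p->cur_index;
        }
    }
    delay = delays[next];
    if (p->time > delay) p->time = delay;      /* do not jump over a frame    */
    p->cur_index = next;
    return next;
}
```

With three frames of 1.0 s each, a tick of 0.25 s, and total_loop = 1, the player advances every four ticks and ends on the last frame, which matches the stop-at-last-frame behavior in the original.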

Conclusion

That concludes this introduction to YYImage. To understand more of the details, read the source code; if you do not know how to use it, or run into problems using it, the project also includes example cases to refer to.

Finally, every act of sharing is also a gain.