Preface

Image encoding: Images are used everywhere in app development, and they come in many formats, such as JPEG and PNG. Each of these formats is simply a bitmap that has been encoded (compressed) with a different algorithm. (The encoding algorithms themselves are not covered here.)

Image decoding: When the app reads an encoded image from disk, it must decode the image back into a bitmap before it can be displayed on the screen.

Bitmap: A bitmap (also called a raster image) consists of a large number of pixels, the individual points that make up the image. Given the information for every pixel, the entire image can be rendered on the screen.

First, the nature of an image

An image is essentially a bitmap: a two-dimensional array of pixels in which each pixel records information such as the color of that point.

Since pixels store color data, this leads to the concept of a color storage format. Take the most common one, 32-bit RGBA: each pixel needs 32 bits (4 bytes) to store its color, and each byte (8 bits) holds one channel:

  • R = red (1byte or 8bit)
  • G = green (1byte or 8bit)
  • B = blue (1byte or 8bit)
  • A = alpha (1byte or 8bit)

So the 32-bit RGBA format can represent 2^8 × 2^8 × 2^8 = 256 × 256 × 256 ≈ 16.7 million colors. The concept of color spaces is not expanded on here.

A bitmap is exactly such an array of pixels, which should give you a sense of how much data even an ordinary bitmap contains. The color format is explained here so that we can calculate bitmap sizes below and understand why images need to be encoded.
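
To make the layout above concrete, here is a minimal sketch (the struct name is my own, not from any framework) of how one 32-bit RGBA pixel can be modeled in code:

// One byte per channel, four bytes per pixel (32-bit RGBA)
typedef struct {
    uint8_t r; // red
    uint8_t g; // green
    uint8_t b; // blue
    uint8_t a; // alpha
} RGBAPixel;   // sizeof(RGBAPixel) == 4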

Second, why bitmaps need to be encoded

From iOS – graphics advanced processing (1: image display related theory) we know that image decompression is a very time-consuming CPU operation and that it runs on the main thread by default. This can seriously hurt the responsiveness of our application when there are many images to load, especially in a fast-scrolling list. An obvious question follows: why not skip encoding and decoding altogether and just use bitmaps everywhere? Here is why images still need to be encoded and decoded.

1. Why bitmaps need to be encoded:

Suppose a bitmap is 100 pixels wide and 100 pixels high. How large is it?

bytesPerPixel = 4 (one byte each for R, G, B, A)
size = width * height * bytesPerPixel

By substituting our 100×100 bitmap into the formula, we can get its size:

size = 100 * 100 * 4 = 40000 bytes ≈ 39 KB

An ordinary 100×100 image in PNG or JPEG format, by contrast, is only a few kilobytes. Larger bitmaps take up even more space, so bitmaps must be encoded (compressed) before being stored.

2. Bitmap encoding formats:

Apple provides two image encoding formats, PNG and JPEG:

PNG is lossless compression, while JPEG can be lossy (with an adjustable quality from 0 to 100%); that is, it discards some information so that the compressed image becomes smaller.

// return image as PNG. May return nil if image has no CGImageRef or invalid bitmap format
UIKIT_EXTERN NSData * __nullable UIImagePNGRepresentation(UIImage * __nonnull image);

// return image as JPEG. May return nil if image has no CGImageRef or invalid bitmap format.
// compression is 0(most)..1(least)
UIKIT_EXTERN NSData * __nullable UIImageJPEGRepresentation(UIImage * __nonnull image, CGFloat compressionQuality);
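
As a quick illustration, here is a hedged usage sketch of these two APIs (the asset name and compression quality are arbitrary assumptions):

UIImage *image = [UIImage imageNamed:@"photo"]; // "photo" is an assumed asset name
// Lossless PNG encoding
NSData *pngData = UIImagePNGRepresentation(image);
// Lossy JPEG encoding at quality 0.7 (1.0 = best quality, least compression)
NSData *jpegData = UIImageJPEGRepresentation(image, 0.7);
NSLog(@"PNG: %lu bytes, JPEG: %lu bytes",
      (unsigned long)pngData.length, (unsigned long)jpegData.length);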

Third, why images need to be decoded, and how this is applied in practice

1. Why encoded images must be decoded:

To display an encoded image on the screen, we need all of the image's pixel information, i.e. the bitmap that existed before encoding. So an encoded image must be decoded before it can be displayed properly.

2. The three Buffer concepts mentioned at WWDC 2018:

A Buffer is a contiguous region of memory; here it means a series of memory elements with the same internal structure and size. There are three kinds of Buffer: Data Buffer, Image Buffer, and Frame Buffer. These concepts were described by Apple at WWDC 2018; for details see Image and Graphics Best Practices.

  • A Data Buffer holds the raw encoded data in memory; the image may be stored in any format, such as JPG or PNG. The bytes in a Data Buffer do not describe the image's bitmap pixels.
  • An Image Buffer is the in-memory representation of the image, where each element describes one pixel. Its size equals the size of the decoded bitmap (a small sketch comparing the two follows this list).
  • A Frame Buffer holds the same kind of content as an Image Buffer, but it lives in VRAM (video RAM), whereas the Image Buffer lives in RAM.
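
To make the difference concrete, here is a hedged sketch that compares the size of the encoded Data Buffer with the size of the Image Buffer the decoder would have to produce:

// Data Buffer: the encoded JPEG bytes read from disk ("photo.jpg" is an assumed path)
NSData *dataBuffer = [NSData dataWithContentsOfFile:@"photo.jpg"];
UIImage *image = [UIImage imageWithData:dataBuffer];
CGImageRef cgImage = image.CGImage;
// Image Buffer: one element per decoded pixel, roughly bytesPerRow * height bytes
size_t imageBufferSize = CGImageGetBytesPerRow(cgImage) * CGImageGetHeight(cgImage);
NSLog(@"Data Buffer: %lu bytes, Image Buffer: %zu bytes",
      (unsigned long)dataBuffer.length, imageBufferSize);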

3. The image loading and decoding process (for part of the image-loading theory, see Pictures show relevant theories):

Picture decoding process:

1. Suppose there is a JPEG image in the local sandbox (or a resource in the project bundle), and we write:

UIImageView *imageView = ...;
// UIImage *image = [UIImage imageNamed:@"xxx"];
UIImage *image = [UIImage imageWithContentsOfFile:@"xxx.JPG"];
imageView.image = image;

UIImage is the high-level class for handling images in iOS. Creating a UIImage instance only loads the Data Buffer; in other words, it merely wraps the encoded data (stored in the Data Buffer) in a UIImage object. The image is not decoded at this point.

Implicit decoding is triggered when the image is about to be displayed on the screen. (The image must be assigned to a UIImageView, and the UIImageView must be added to the view hierarchy, for decoding to be triggered.) So if you instantiate a UIImageView but never add it to a view with addSubview:, the system will not decode the image.

Practical problems arise:

This decoding happens on the main thread by default and consumes a lot of CPU. So if a tableView or collectionView has many images to display, decoding them all on the main thread will inevitably hurt scrolling smoothness. Can we instead force decoding onto a background thread and let the system render the already-decoded image on the main thread? Of course we can; almost all open-source image libraries, such as YYImage and SDWebImage, do exactly this.
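
As a minimal hedged sketch of that idea (using the decodedImageWithImage: method shown in the SDWebImage section below; downloadedImage and imageView are assumed to already exist):

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // Force decoding on a background thread (downloadedImage is assumed)
    UIImage *decodedImage = [UIImage decodedImageWithImage:downloadedImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        // The main thread only hands the ready bitmap to the render server
        imageView.image = decodedImage;
    });
});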

Practical solutions:

The principle of manual decoding is to redraw the image into a new bitmap context and obtain a new, already-decoded bitmap. The core function used is CGBitmapContextCreate:

CG_EXTERN CGContextRef __nullable CGBitmapContextCreate(void * __nullable data,
    size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow,
    CGColorSpaceRef cg_nullable space, uint32_t bitmapInfo)
    CG_AVAILABLE_STARTING(__MAC_10_0, __IPHONE_2_0);

This function creates a bitmap graphics context, a CGContext object (its return type CGContextRef is really CGContext *). The function is covered in detail in many blog posts on image decoding and in the official CGBitmapContextCreate documentation.

The open-source frameworks' solutions are also based on this API:

1. The decoding code in YYImage:

CGImageRef YYCGImageCreateDecodedCopy(CGImageRef imageRef, BOOL decodeForDisplay) {
    if (!imageRef) return NULL;
   size_t width = CGImageGetWidth(imageRef);
   size_t height = CGImageGetHeight(imageRef);
   if (width == 0 || height == 0) return NULL;
   
   if (decodeForDisplay) { //decode with redraw (may lose some precision)
       CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef) & kCGBitmapAlphaInfoMask;
       BOOL hasAlpha = NO;
       if (alphaInfo == kCGImageAlphaPremultipliedLast ||
           alphaInfo == kCGImageAlphaPremultipliedFirst ||
           alphaInfo == kCGImageAlphaLast ||
           alphaInfo == kCGImageAlphaFirst) {
           hasAlpha = YES;
       }
       // BGRA8888 (premultiplied) or BGRX8888
       // same as UIGraphicsBeginImageContext() and -[UIView drawRect:]
       CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
       bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
       CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, YYCGColorSpaceGetDeviceRGB(), bitmapInfo);
        if (!context) return NULL;
       CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); // decode
       CGImageRef newImage = CGBitmapContextCreateImage(context);
       CFRelease(context);
       return newImage;
       
    } else {
        ...
    }
}

In effect, this method creates a copy of the image. It takes an original bitmap parameter, imageRef, and returns a new decoded bitmap, newImage, through three steps:

  • Create a bitmap context using the CGBitmapContextCreate function;
  • Draw the raw bitmap into the context using the CGContextDrawImage function;
  • Using the CGBitmapContextCreateImage function, create a new, decompressed bitmap from the context.

In fact, the image decompression process in SDWebImage is essentially the same as the above, with only minor differences in some of the parameters passed to CGBitmapContextCreate.

2. The decoding implementation in SDWebImage:

+ (nullable UIImage *)decodedImageWithImage:(nullable UIImage *)image {
    if (![UIImage shouldDecodeImage:image]) {
        return image;
   }
   
   // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
   // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
   @autoreleasepool{
       
       CGImageRef imageRef = image.CGImage;
       CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:imageRef];
       
       size_t width = CGImageGetWidth(imageRef);
       size_t height = CGImageGetHeight(imageRef);
       size_t bytesPerRow = kBytesPerPixel * width;

       // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
       // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
       // to create bitmap graphics contexts without alpha info.
       CGContextRef context = CGBitmapContextCreate(NULL,
                                                    width,
                                                    height,
                                                    kBitsPerComponent,
                                                    bytesPerRow,
                                                    colorspaceRef,
                                                    kCGBitmapByteOrderDefault|kCGImageAlphaNoneSkipLast);
       if (context == NULL) {
           return image;
       }
       
       // Draw the image into the context and retrieve the new bitmap image without alpha
       CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
       CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
       UIImage *imageWithoutAlpha = [UIImage imageWithCGImage:imageRefWithoutAlpha
                                                        scale:image.scale
                                                  orientation:image.imageOrientation];
       
       CGContextRelease(context);
       CGImageRelease(imageRefWithoutAlpha);
       
       return imageWithoutAlpha;
   }
}

+ (BOOL)shouldDecodeImage:(nullable UIImage *)image {
   // Prevent "CGBitmapContextCreateImage: invalid context 0x0" error
   if (image == nil) {
        return NO;
    }
    
    // do not decode animated images
    if (image.images != nil) {
        return NO;
   }
   
   CGImageRef imageRef = image.CGImage;
   
   CGImageAlphaInfo alpha = CGImageGetAlphaInfo(imageRef);
   BOOL anyAlpha = (alpha == kCGImageAlphaFirst ||
                    alpha == kCGImageAlphaLast ||
                    alpha == kCGImageAlphaPremultipliedFirst ||
                    alpha == kCGImageAlphaPremultipliedLast);
   // do not decode images with alpha
   if (anyAlpha) {
       return NO;
   }
   
   return YES;
}

Unlike YYImage, SDWebImage returns the original image without decoding it if the image has an alpha channel. Because the alpha values are unknown, the image is left untouched so the original is preserved intact.

SDWebImage also wraps the decoding operation in an @autoreleasepool, so that local variables are released as early as possible and memory does not spike when a large number of images need to be decoded.

Fourth, Apple's solution to the pain point of displaying large images:

Displaying large images may seem unrelated to image encoding and decoding. But a large image occupies more memory, and decoding it and uploading it to the GPU also take more time, while the size at which the image actually needs to be displayed may not be very large. If the large image can be shrunk first, we gain the corresponding optimization. Below is the large-image solution given at WWDC; it shrinks (downsamples) the image and decodes it at the same time:

1. Objective-C:

- (UIImage *)downsampleImageAt:(NSURL *)imageURL to:(CGSize)pointSize scale:(CGFloat)scale {
    // Create an image source from the image file URL
    NSDictionary *imageSourceOptions = @{
        (__bridge NSString *)kCGImageSourceShouldCache: @NO // do not decode the original image yet
    };
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL,
                                                              (__bridge CFDictionaryRef)imageSourceOptions);
    
    CGFloat maxDimensionInPixels = MAX(pointSize.width, pointSize.height) * scale;
    NSDictionary *downsampleOptions = @{
        (__bridge NSString *)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
        (__bridge NSString *)kCGImageSourceShouldCacheImmediately: @YES, // decode while downsampling
        (__bridge NSString *)kCGImageSourceCreateThumbnailWithTransform: @YES,
        (__bridge NSString *)kCGImageSourceThumbnailMaxPixelSize: @(maxDimensionInPixels)
    };
    CGImageRef downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0,
                                                                      (__bridge CFDictionaryRef)downsampleOptions);
    UIImage *image = [[UIImage alloc] initWithCGImage:downsampledImage];
    CGImageRelease(downsampledImage);
    CFRelease(imageSource);
    return image;
}
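
A hedged usage sketch, reusing the background-thread pattern from earlier (imageURL, targetSize, and imageView are assumed to exist in the caller):

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // Downsample and decode off the main thread (imageURL and targetSize are assumptions)
    UIImage *thumbnail = [self downsampleImageAt:imageURL
                                              to:targetSize
                                           scale:[UIScreen mainScreen].scale];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = thumbnail;
    });
});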

2. Swift:

// Downsampling large images for display at smaller size
func downsample(imageAt imageURL: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage {
    let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, imageSourceOptions)!
    let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions =
    [kCGImageSourceCreateThumbnailFromImageAlways: true,
    kCGImageSourceShouldCacheImmediately: true,
    kCGImageSourceCreateThumbnailWithTransform: true,
    kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels] as CFDictionary
 
    let downsampledImage =
    CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions)!
    return UIImage(cgImage: downsampledImage)
}

Reference documentation

  • WWDC 2018
  • Pictures show relevant theories
  • Image and Graphics Best Practices
  • Explore the decompression of iOS images into the rendering process
  • iOS image decompression