Image usage, memory comparison, and best practices in iOS

Preliminary knowledge

Our comparison focuses on memory usage. The formats compared are JPG and PNG, the two most widely used formats, which represent lossy and lossless compression respectively; for their characteristics and background, see Guo Yaoyuan's article: A Survey of Mobile Image Formats. As that article shows, on iOS devices their decoding costs are on the same order of magnitude and both are fast. At WWDC 2018, Apple focused on the memory usage of images (because the usual way of using them is actually fairly inefficient) and offered some guidance. Here we will use demo experiments (and profiling tools) to reveal some facts and phenomena, so that you can understand and analyze them and form best practices for your own code.

Demo: DownSampleDemo

Scenario

First, Apple tells us that the main memory footprint of an image in an app (which usually appears while the image is being decoded and displayed) is actually independent of the image's file size; what matters is its pixel dimensions. The decode buffer is calculated as width * height * N, where N is usually 4 (the most common ARGB8888 format, 4 bytes per pixel), depending on the pixel format you use for display. But much of the time we simply hand a UIImage to a UIImageView, and the size of that view may actually be much smaller than the UIImage itself.
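To make the formula concrete, here is a tiny sketch (the helper name is mine, not from the demo) that applies width * height * 4 to the 2560*1440 test image used later; the decoded buffer comes out to roughly 14 MB no matter how small the JPG or PNG file is on disk:

    // Illustrative helper: decode-buffer size in bytes for an ARGB8888 bitmap
    func decodedBufferSize(pixelWidth: Int, pixelHeight: Int, bytesPerPixel: Int = 4) -> Int {
        return pixelWidth * pixelHeight * bytesPerPixel
    }

    let bytes = decodedBufferSize(pixelWidth: 2560, pixelHeight: 1440)
    print(Double(bytes) / 1024 / 1024)   // ~14 MB (14,745,600 bytes)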

Three ways to use images:

Method A

image1.image = UIImage(named: "000.jpg")

This is the most common way we use images. You might also think of imageWithContentsOfFile, but there is really only one difference: imageNamed normally caches the decoded image for the lifetime of the app, so the decode does not need to be repeated, while imageWithContentsOfFile has no such cache and the memory is freed once the image is no longer used. That distinction does not matter for our test, because we are actively using these images and watching their memory footprint while they are displayed.
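For reference, a minimal sketch of the two loading calls side by side (the bundle path lookup is my own illustration; the demo itself only uses the named variant):

    import UIKit

    // Cached for the lifetime of the app; repeated loads reuse the decoded image.
    let cached = UIImage(named: "000.jpg")

    // Not cached; decoded on each load and freed once no longer referenced.
    // Assumes the image lives in the main bundle.
    if let path = Bundle.main.path(forResource: "000", ofType: "jpg") {
        let uncached = UIImage(contentsOfFile: path)
    }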

Method B

+ (UIImage *)OriginImage:(UIImage *)image scaleToSize:(CGSize)size {
    // Create a bitmap context and make it the current context
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // Create a resized UIImage from the current context
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context off the stack
    UIGraphicsEndImageContext();
    // Return the new resized image
    return scaledImage;
}

This is a widely used way to scale images: the original is redrawn into a bitmap context of the target size.
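For completeness, here is a minimal Swift sketch of the same redraw-based scaling using UIGraphicsImageRenderer; it is not part of the demo, just the modern equivalent of the Objective-C helper above:

    import UIKit

    // Redraws `image` into a bitmap context of the target size and returns the result.
    func scaled(_ image: UIImage, to size: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: size))
        }
    }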

Method C

    func downsample(imageAt imageURL: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage
    {
        let sourceOpt = [kCGImageSourceShouldCache : false] as CFDictionary
        // In other scenarios you can use CGImageSourceCreateWithData (the data is not decoded yet, so memory stays small)
        let source = CGImageSourceCreateWithURL(imageURL as CFURL, sourceOpt)!
        
        let maxDimension = max(pointSize.width, pointSize.height) * scale
        let downsampleOpt = [kCGImageSourceCreateThumbnailFromImageAlways : true,
                             kCGImageSourceShouldCacheImmediately : true ,
                             kCGImageSourceCreateThumbnailWithTransform : true,
                             kCGImageSourceThumbnailMaxPixelSize : maxDimension] as CFDictionary
        let downsampleImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOpt)!
        return UIImage(cgImage: downsampleImage)
    }

This is the new approach Apple introduced at WWDC 2018: the image is downsampled through ImageIO, so only a thumbnail-sized decode buffer is ever created.
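A minimal usage sketch, assuming the image ships in the main bundle and the target is the 282*138 image view from the demo (the URL lookup and the imageView name are placeholders):

    if let url = Bundle.main.url(forResource: "000", withExtension: "jpg") {
        imageView.image = downsample(imageAt: url,
                                     to: CGSize(width: 282, height: 138),
                                     scale: 2) // the demo fixes the scale at 2x
    }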

About the test

Test devices: an iPhone 8 on iOS 11.4 and an iPhone 6 on iOS 12 beta 2. We put 2560*1440 images into a 282*138 view, and used a 2x scale factor for the downsample to make sure it still looked good.
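Plugging these numbers into the earlier formula: a full decode is 2560 * 1440 * 4 ≈ 14.7 MB, while downsampling to the view size at 2x (564 * 276 pixels) needs 564 * 276 * 4 ≈ 0.6 MB, so the third method should require only a few percent of the memory of a full decode.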

Details of the experimental data are in the comments of DownSampleDemo's ViewController.swift; here I will just give the conclusions:

  1. The memory footprint shown on the Xcode panel is unreliable; in many cases the app actually uses much more memory than it shows (especially on the iOS 11.4 device). It is still fine as a rough indicator: if it already looks too big there, it really is too big.
  2. You can use the Allocations instrument in Instruments, or a memgraph plus command-line analysis, to observe memory usage in detail.
  3. With the first method, after loading, a JPG's memory is split between ImageIO and IOKit, while a PNG's memory appears only under ImageIO, and the JPG ends up using much less memory than the PNG.
  4. With the second method the figures on the Xcode panel are distorted, and even in the other two tools they are very unreliable; its memory consumption may even exceed that of the first method.
  5. The third method has an ideal memory footprint: it strictly follows the size given by the formula (based on the dimensions after your downsample), and the memory appears under CG raster data.
  6. Memgraph command: vmmap --summary xxx.memgraph
  7. In Allocations, we only need to look at the dirty size and swapped size under the VM Allocations column. You have to take a snapshot manually to see the data in this column, and you should run it repeatedly to make sure the result is consistent.

VM Tracker running screenshot:

Finally, the best practice from Apple: it is strongly recommended that you use downsampling for large images! Thank you! :]