In iOS development, UIImage is one of the objects that consumes the most memory. Excessive memory usage at runtime hurts battery life, and if usage exceeds the system limit, the app crashes. This article introduces image loading from the perspective of the underlying loading principles and the SDWebImage source code.
Image rendering process
In iOS, we use UIImage and UIImageView to work with images. They follow the classic MVC pattern: UIImage is the model and UIImageView is the view.
UIImage loads the image data; UIImageView renders it.
The image rendering pipeline has three stages: Load, Decode, and Render. Each stage has a corresponding buffer: the DataBuffer, the ImageBuffer, and the framebuffer.
To analyze the buffers for the first two phases, load an image with a size of 2048 px by 1536 px and a disk size of 590KB.
DataBuffer
A DataBuffer is simply a buffer containing a sequence of bytes. It usually starts with metadata describing the size of the image stored in the buffer, followed by the image data itself, encoded in some form such as JPEG or PNG. This means the bytes do not directly describe the pixels of the image. At this point the DataBuffer size is 590 KB.
SD source code analysis
In SDWebImage, after the image data is loaded, the image format is determined from the first byte of the DataBuffer in the sd_imageFormatForImageData method:
```objectivec
uint8_t c;
[data getBytes:&c length:1];
switch (c) {
    case 0xFF:
        return SDImageFormatJPEG;
    case 0x89:
        return SDImageFormatPNG;
    case 0x47:
        return SDImageFormatGIF;
    case 0x49:
    case 0x4D:
        return SDImageFormatTIFF;
    // ...
}
```
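The same first-byte sniffing is easy to express in Swift. The sketch below is only an illustration of the technique, not SDWebImage's actual API (the enum and function names are mine):

```swift
import Foundation

enum ImageFormat: Equatable {
    case jpeg, png, gif, tiff, undefined
}

// Simplified first-byte format sniffing, in the spirit of
// SDWebImage's sd_imageFormatForImageData.
func imageFormat(for data: Data) -> ImageFormat {
    guard let first = data.first else { return .undefined }
    switch first {
    case 0xFF:       return .jpeg  // JPEG starts with FF D8
    case 0x89:       return .png   // PNG starts with 89 50 4E 47
    case 0x47:       return .gif   // GIF starts with "GIF"
    case 0x49, 0x4D: return .tiff  // TIFF: "II" or "MM" byte-order marker
    default:         return .undefined
    }
}
```

Checking a single magic byte is cheap because it never touches the encoded pixel data, which is why it can run before any decoding happens.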
ImageBuffer
After the image is loaded, the JPEG, PNG, or otherwise encoded data in the DataBuffer must be converted into per-pixel image information. This process is called decoding, and the resulting pixel information is stored in the ImageBuffer.
Occupied memory size
The memory footprint of a decoded image depends on its pixel dimensions, not its file size. In iOS's SRGB display format (4 bytes per pixel), decoding every pixel of this image requires 2048 px × 1536 px × 4 bytes/px ≈ 12 MB, so the ImageBuffer is roughly 12 MB.
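The arithmetic can be checked with a quick sketch (the helper function is mine, not an API; note that the exact product is 12,582,912 bytes, which some write-ups round to 10 MB):

```swift
import Foundation

// Estimated decoded ImageBuffer size: pixel count × bytes per pixel.
// SRGB uses 4 bytes per pixel.
func imageBufferByteCount(widthPx: Int, heightPx: Int, bytesPerPixel: Int = 4) -> Int {
    widthPx * heightPx * bytesPerPixel
}

let bytes = imageBufferByteCount(widthPx: 2048, heightPx: 1536)
print(bytes)                      // 12582912
print(Double(bytes) / 1_048_576)  // 12.0 (MB)
```

Compare this with the 590 KB DataBuffer: the decoded buffer is over 20 times larger than the file on disk, which is why decoding, not loading, dominates memory usage.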
After decoding, the ImageBuffer is submitted to the framebuffer for rendering.
In general, the image loading process and memory consumption are shown below:
Xcode test
In an Xcode project, only one image is loaded when a new page is pushed.
Memory value before loading:
Memory value after loading:
In most cases we don't need to display images at such high precision, yet they still occupy this much memory. Can we reduce the memory used when loading images?
How to reduce image footprint
Downsampling
In its official documentation, Apple recommends using downsampling when loading images, which reduces the size of the ImageBuffer.
The method is as follows:
```swift
func downsample(imageAt imageURL: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage {
    let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, imageSourceOptions)!
    let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
    ] as CFDictionary
    let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions)!
    return UIImage(cgImage: downsampledImage)
}
```
Let’s test it out:

```swift
let imageStr = Bundle.main.path(forResource: "view_site.jpeg", ofType: nil)
let imageURL = URL(string: "file://" + (imageStr ?? ""))
guard let imgURL = imageURL else {
    return
}
imageView.image = downsample(imageAt: imgURL, to: CGSize(width: 200, height: 200), scale: UIScreen.main.scale)
```
Memory before loading is about 13 MB; after loading it is about 17 MB. The effect is significant: compared with decoding the full-size image, we save roughly 5 MB of memory. When we need to shrink images, we should prefer downsampling.
SD source code analysis: the decoding process
In SDWebImage there are three decoders: SDImageIOCoder, SDImageGIFCoder, and SDImageAPNGCoder. The appropriate coder is chosen according to the DataBuffer's encoding type.
Decoding parameters are configured and decoding begins in the - (UIImage *)decodedImageWithData:(NSData *)data method.
The actual decode is completed in + (UIImage *)createFrameAtIndex:(NSUInteger)index source:(CGImageSourceRef)source scale:(CGFloat)scale preserveAspectRatio:(BOOL)preserveAspectRatio thumbnailSize:(CGSize)thumbnailSize options:(NSDictionary *)options.
Select the correct image rendering format
Rendering formats
In iOS, there are four rendering formats:

- Alpha 8 Format: 1 byte per pixel; good for monochrome images such as masks.
- Luminance and Alpha 8 Format: 2 bytes per pixel; good for monochrome images with transparency.
- SRGB Format: 4 bytes per pixel; the standard full-color format.
- Wide Format: 8 bytes per pixel; a wide color gamut format suited to high-precision images.
How to choose the correct rendering format
The right way to think about it: don't choose the rendering format; let the rendering format choose you.
Use UIGraphicsImageRenderer instead of UIGraphicsBeginImageContextWithOptions. Since iOS 12, the former automatically picks the best rendering format, while the latter always defaults to SRGB Format.
```swift
func render() -> UIImage {
    let bounds = CGRect(x: 0, y: 0, width: 300, height: 100)
    let renderer = UIGraphicsImageRenderer(size: bounds.size)
    let image = renderer.image { context in
        UIColor.blue.setFill()
        let path = UIBezierPath(roundedRect: bounds,
                                byRoundingCorners: .allCorners,
                                cornerRadii: CGSize(width: 20, height: 20))
        path.addClip()
        UIRectFill(bounds)
    }
    return image
}
```
In this case, the system automatically selects the Alpha 8 Format, reducing memory usage by 75% compared with SRGB.
Reduce the use of backing stores
Reduce or avoid the draw(_:) method
When you need to draw a view that has subviews, don't override draw(_:). Use the system view properties or add subviews instead, leaving the drawing work to the system.
For example, set backgroundColor directly via UIView.backgroundColor rather than filling the background in draw(_:).
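As a small illustration of the point above, a rounded colored view can be configured entirely through view and layer properties, so the system never needs the custom backing store that a draw(_:) override would require (a minimal sketch; the view and its values are made up):

```swift
import UIKit

// Configure properties instead of overriding draw(_:):
// the system handles all drawing, and no custom backing store is allocated.
let badge = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 100))
badge.backgroundColor = .blue    // system-drawn background color
badge.layer.cornerRadius = 20    // rounded corners via Core Animation
badge.layer.masksToBounds = true
```

This produces the same visual result as the render() example above, but without allocating any image buffer of our own.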
How to load images in a list
In development we generally load images asynchronously on a background thread, decoding and downsampling them off the main thread. When a list loads many images at once, you should be aware of thread explosion.
Thread explosion
Thread explosion occurs when we ask the system to do more parallel work than the CPUs can handle. Say we must display eight images but have only two cores: the work cannot all run at once, and to avoid deadlocking tasks dispatched asynchronously to a global queue, GCD creates new threads to pick up the work. The CPU then spends a great deal of time switching between those threads, trying to make incremental progress on every task, and that context switching is expensive. The fix is to stop dispatching each decode directly to a global concurrent queue and instead create a single serial queue, dispatching the work onto it asynchronously in the prefetch method. An individual image may finish slightly later than before, but the CPU wastes far less time switching back and forth between small tasks.
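A minimal sketch of this pattern (the queue label and loop are mine, standing in for a prefetch callback): all decode work funnels through one serial DispatchQueue, so GCD never spawns a thread per image.

```swift
import Foundation

// One serial queue for all decode work: at most one background decode
// runs at a time, so GCD does not spawn a new thread for every image.
let decodeQueue = DispatchQueue(label: "com.example.image-decode")

var decoded: [Int] = []

// Stand-in for collectionView(_:prefetchItemsAt:) scheduling 8 decodes.
for index in 0..<8 {
    decodeQueue.async {
        // The actual decode/downsample work would happen here.
        decoded.append(index)
    }
}

// Block until the serial queue drains (for demonstration only).
decodeQueue.sync { }
print(decoded)  // FIFO order is preserved: [0, 1, 2, 3, 4, 5, 6, 7]
```

Because the queue is serial, `decoded` is only ever mutated from one thread at a time, and the tasks complete in submission order.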
In SDWebImage, the decoding queue _coderQueue has maxConcurrentOperationCount = 1, making it a serial queue. This neatly avoids thread explosion when decoding multiple images asynchronously.