1. Introduction

At WWDC 2018 Apple presented its official best practices for image handling on iOS. This article summarizes and analyzes that session and extends it with more detail.

Image and Graphics Best Practices

2. Basic knowledge

What happens when a local image is displayed on the screen

The code itself is trivial, just two lines:

    UIImage *image = [UIImage imageNamed:@"xxxxx"];
    imageView.image = image;

But behind those two lines, loading and displaying an image actually involves the following steps:

  1. Read the compressed image data (PNG/JPEG, etc.) from disk into memory (the Data Buffer)
  2. The CPU decodes (decompresses) that data into a bitmap (the Image Buffer)
  3. Render the image (the content is composited into the Frame Buffer and finally shown on the screen)

In classic MVC terms, UIImage plays the model, carrying the image data, while UIImageView plays the view, rendering and displaying it. The interface the system exposes is very simple and hides the decoding step entirely.

Buffers

A buffer is a contiguous region of memory. Let's look at the buffers involved in image processing.

Data Buffer

The Data Buffer stores the raw, still-encoded bytes of the image file. Common formats such as JPEG and PNG are compressed, so the size of the Data Buffer equals the size of the source file on disk.

Image Buffer

An Image Buffer stores the decoded pixel data of the image, also known as a bitmap. Each element in the buffer describes the color of one pixel, so the size of the buffer is proportional to the image's pixel dimensions, not to the file size.

Frame Buffer

The Frame Buffer stores the rendered output of each frame of the app.

As with the frame buffer in OpenGL, Apple does not let us render directly to the screen. Instead, render output goes into the frame buffer, which the display scans out at 60 Hz (up to 120 Hz on supported devices).

When the view hierarchy changes, UIKit composites the UIWindow and its subviews into the frame buffer, which is then scanned out at 60 Hz (up to 120 Hz on iPads with ProMotion) and shown on screen.

Decoding operation

UIImage decompresses the contents of the Data Buffer and allocates an Image Buffer to hold the decoded pixels. UIImageView then copies the Image Buffer into the frame buffer so the image appears on screen.

Decompression is CPU-intensive, so UIImage holds on to the decoded data so that it can be reused wherever the image needs to be rendered.

Rendering process

Putting this together gives the whole rendering pipeline. Note that the size of the decoded Image Buffer depends only on the image's pixel dimensions, not on the size of the file on disk.

With four bytes (RGBA) per pixel, the decoded bitmap of a 1080p image is 1920 × 1080 × 4 bytes ≈ 7.9 MB, while the JPEG source, assuming a compression ratio of roughly 1:20, is only around 400 KB on disk. The memory footprint after decoding is clearly much larger.

3. Official best practices

High memory usage drives up the app's CPU usage, which in turn increases power consumption and slows the app down.

DownSampling

When the view is small but the image is large, displaying the original image directly wastes memory and CPU. In this case we can use ImageIO to downsample the image and generate a thumbnail.

A sketch of the code is below; you pass in the size of the area that will actually display the image.
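
The following is a sketch along the lines of the code shown in the session (the helper name and signature follow the WWDC example; treat it as a starting point rather than a drop-in utility):

    import ImageIO
    import UIKit

    // Downsample a large image on disk to the pixel size actually needed for display
    func downsample(imageAt imageURL: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage? {
        // Don't decode or cache yet; we only need a reference to the source here
        let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
        guard let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, sourceOptions) else {
            return nil
        }

        let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
        let downsampleOptions = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            // Decode now, on the calling thread, instead of later at render time
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
        ] as CFDictionary

        guard let thumbnail = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
            return nil
        }
        return UIImage(cgImage: thumbnail)
    }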

There are two caveats here

  • kCGImageSourceShouldCache is set to false so the decoded data is not cached at creation time (on 64-bit platforms the default is to cache). This makes sense: the same source image may be used in different places at different thumbnail sizes, so caching one decoded version here is not useful.
  • kCGImageSourceShouldCacheImmediately is set to true so decoding happens right now, when the thumbnail is created, rather than later at render time; the default is false.

Downsampling like this saves a lot of memory and CPU; the session shows a before/after memory comparison for the official demo case.

Prefetching && Background decoding

Decoding is CPU-intensive and would otherwise run on the main thread, so it should be moved to a background queue (a sketch follows after the next two paragraphs).

Prefetch data before the cell comes on screen (almost all major apps do this); since iOS 10, tableView(_:prefetchRowsAt:) from UITableViewDataSourcePrefetching makes this straightforward.

Use a serial queue for the decoding work to avoid thread explosion: thread switching is expensive, so the app should funnel decoding through a GCD serial queue rather than spawning concurrent work per image.
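
A minimal sketch combining both ideas, assuming the downsample(imageAt:to:scale:) helper from the previous section; the class name, queue label and image size are illustrative:

    import UIKit

    class FeedViewController: UITableViewController, UITableViewDataSourcePrefetching {
        // One serial queue for all decoding work, to avoid thread explosion
        private let decodeQueue = DispatchQueue(label: "com.example.image-decode")
        private var imageURLs: [URL] = []

        override func viewDidLoad() {
            super.viewDidLoad()
            tableView.prefetchDataSource = self
        }

        func tableView(_ tableView: UITableView, prefetchRowsAt indexPaths: [IndexPath]) {
            for indexPath in indexPaths {
                let url = imageURLs[indexPath.row]
                decodeQueue.async {
                    // Downsample and decode off the main thread
                    let image = downsample(imageAt: url, to: CGSize(width: 80, height: 80),
                                           scale: UIScreen.main.scale)
                    DispatchQueue.main.async {
                        // Hand the decoded image to a cache or to the visible cell here
                        _ = image
                    }
                }
            }
        }
    }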

Official UI implementation example

Suppose we now need to implement the Live button from the session demo.

Let's start with an inefficient implementation (a reconstruction is sketched below).
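
The session's sample is not reproduced in this text, so the following is only a rough reconstruction of the kind of monolithic view being criticized: one UIView that overrides draw(_:) and paints background, icon and text by hand (all names and metrics are illustrative):

    import UIKit

    class NaiveLiveButton: UIView {
        var icon: UIImage?
        var title: String = "Live"

        override func draw(_ rect: CGRect) {
            // Background and rounded corners drawn by hand into the backing store
            UIColor.darkGray.setFill()
            UIBezierPath(roundedRect: rect, cornerRadius: 5).fill()

            // Icon and text are also drawn into the backing store
            icon?.draw(in: CGRect(x: 8, y: 8, width: 20, height: 20))
            (title as NSString).draw(at: CGPoint(x: 34, y: 10),
                                     withAttributes: [.foregroundColor: UIColor.white])
        }
    }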

Let's look at what is wrong with this approach.

A UIView is ultimately displayed through the frame buffer via its backing CALayer. When drawRect: is overridden, the CALayer allocates a backing store, the drawing commands execute into that backing store, and its contents are then passed to the frame buffer for display.

By default the backing store's size is proportional to the view's size; for a full-screen view on an iPhone 6 that is 750 × 1334 × 4 bytes ≈ 3.8 MB.

iOS 12 optimizes the backing store: its size now adapts dynamically to the color space of what is drawn. Previously, drawing single-channel content into an sRGB-format store made it larger than necessary, causing needless overhead; iOS 12 handles this automatically.

To summarize the problems with drawing via drawRect:

    1. Creating the backing store adds unnecessary memory overhead
    2. The UIImage is first drawn into the backing store and then copied to the frame buffer, an extra memory copy in the middle
    3. The background color does not need to go through the backing store at all; setting backgroundColor on the view lets it be composited directly into the frame buffer

Therefore, the correct approach is to split the large view into smaller subviews.

Background color implementation

The background needs a rounded-corner treatment.

Both UIView's maskView and CALayer's mask render the layer into a temporary image buffer, which is off-screen rendering. CALayer's cornerRadius on its own does not render off-screen; what really triggers it is combining it with properties such as masksToBounds. So for the background, simply use a plain UIView and set its background color.

A digression on rounded corners. First, an incorrect way to do it:


// Incorrect: overriding draw(_:) just to install a mask layer
override func draw(_ rect: CGRect) {
    let maskPath = UIBezierPath(roundedRect: rect,
                                byRoundingCorners: .allCorners,
                                cornerRadii: CGSize(width: 5, height: 5))
    let maskLayer = CAShapeLayer()
    maskLayer.frame = bounds
    maskLayer.path = maskPath.cgPath
    layer.mask = maskLayer
}


First, overriding drawRect creates the unnecessary backing store overhead discussed above; second, this approach creates a mask and blends layers, which also triggers off-screen rendering.

The correct approach is simply to set cornerRadius on the UIView's layer; Core Animation draws the rounded corners for us without extra memory overhead.
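
A minimal sketch (the color is illustrative):

    let backgroundView = UIView()
    backgroundView.backgroundColor = .darkGray
    backgroundView.layer.cornerRadius = 5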

For a UIImageView, you can instead crop the image itself with rounded corners using Core Graphics, as in the example below.


extension UIImage {
    /// Returns a copy of the image clipped to rounded corners, drawn once with Core Graphics.
    func drawRectWithRoundedCorner(radius: CGFloat, _ sizeToFit: CGSize) -> UIImage? {
        let rect = CGRect(origin: .zero, size: sizeToFit)

        UIGraphicsBeginImageContextWithOptions(rect.size, false, UIScreen.main.scale)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }

        // Clip the context to a rounded-rect path, then draw the image into it
        context.addPath(UIBezierPath(roundedRect: rect,
                                     byRoundingCorners: .allCorners,
                                     cornerRadii: CGSize(width: radius, height: radius)).cgPath)
        context.clip()
        self.draw(in: rect)

        return UIGraphicsGetImageFromCurrentImageContext()
    }
}


Live Image implementation

For the image itself, just use a UIImageView. One trick: if the image is a solid-color glyph and you need it in several colors, render it as a template and use UIImageView's tintColor so the same asset can be reused.

The code looks like this (the asset name is illustrative):


// Template rendering + tintColor lets one solid-color asset be reused in any color
let icon = UIImage(named: "live_icon")?.withRenderingMode(.alwaysTemplate)
imageView.image = icon
imageView.tintColor = .red


Implementing the text

Use UILabel for the text; it reduces the backing store overhead by about 75 percent. The system optimizes UILabel and automatically grows its backing store when the content contains emoji or rich text.

Final implementation

The final correct implementation of the Live button is shown below

Image Assets are recommended

  • Lookup by name is optimized, so images are found faster
  • Memory management at runtime is also optimized
  • App Slicing: since iOS 9, only the image variants the device actually needs (2x or 3x) are delivered from the asset catalog
  • Preserve Vector Data: since iOS 11, vector assets are supported and can be scaled up without distortion

Advanced Image Effects

For real-time image effects, the Core Image framework is recommended. For example, to convert a picture to grayscale, some developers use Core Graphics to read every pixel, change its value, and generate the target image. That puts work the GPU is good at onto the CPU. The better approach is to use a Core Image filter, or a Metal or OpenGL shader, and let the GPU do the processing.
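
A minimal sketch of the Core Image route, using the built-in CIPhotoEffectMono filter to produce a grayscale copy on the GPU:

    import CoreImage
    import UIKit

    func grayscale(_ image: UIImage) -> UIImage? {
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CIPhotoEffectMono") else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)

        // CIContext renders on the GPU where available
        let context = CIContext()
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }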

Drawing Off-Screen

For off-screen drawing, UIGraphicsImageRenderer is recommended over UIGraphicsBeginImageContext: it performs better and supports wide color gamut.
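
A minimal sketch of UIGraphicsImageRenderer drawing a small image off screen:

    import UIKit

    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))
    let image = renderer.image { context in
        UIColor.red.setFill()
        context.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
    }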

4. Expand and think

Let's extend the topic by raising a few questions and thinking each one through.

Question 1: image display involves all these details, so why don't we notice them in day-to-day development, and where can our own projects be optimized?

A: Local images are mostly loaded through APIs like UIImage(named:), and network images through frameworks like SDWebImage or YYWebImage, so the details stay hidden and we rarely dig into them.

Which leads to the next question.

Question 2: with imageNamed, when does the system decode? Does it cache, and how large is the cache? Are there performance problems? And how does it differ from imageWithContentsOfFile?

Let's answer these one by one.

  1. First, the difference between imageNamed and imageWithContentsOfFile, a classic interview question: imageNamed caches the image, so loading the same local image a thousand times keeps only one copy in memory; imageWithContentsOfFile does not cache, so loading the same image repeatedly keeps multiple copies in memory.
  2. imageNamed puts both the image's source data and its decoded data into the in-memory cache, and they are released only on a memory warning; interested readers can verify this in the debugger.
  3. As for when a UIImage is decoded: as mentioned in the downsampling section, kCGImageSourceShouldCacheImmediately defaults to false. See the comment on kCGImageSourceShouldCache in ImageIO/CGImageSource.h:

Specifies whether image decoding and caching should happen at image creation time. The value of this key must be a CFBooleanRef. The default value is kCFBooleanFalse (image decoding will happen at rendering time).

So a UIImage is decoded only when it is rendered on screen. Since UIImageView must be used on the main thread, that decoding also happens on the main thread. Constantly creating and rendering large UIImages while a table view scrolls will therefore block the main thread.

Summary: imageNamed caches by default (the cache is implemented with NSCache); imageWithContentsOfFile does not cache and suits large, rarely used images. Because UIImage is decoded on the main thread by default, imageNamed is only appropriate for small icons such as tab bar images and images that must appear on the first screen; it is not suitable for showing lots of downloaded images during scrolling, where it would block the main thread.
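
A quick illustration of the two APIs (the file and asset names are illustrative):

    // Cached by the system; suited to small, frequently used icons
    let tabIcon = UIImage(named: "tab_home")

    // Not cached; suited to large, rarely used images
    let path = Bundle.main.path(forResource: "big_banner", ofType: "jpg") ?? ""
    let banner = UIImage(contentsOfFile: path)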

5. Loading network images correctly

Frameworks such as SDWebImage and YYWebImage already do this correctly; pick one and read its source code for the details.

Some excellent source-code analyses worth sharing:

YYImage design ideas and detailed analysis

YYWebImage source code analysis: thread processing and caching strategy

The download flow, simplified:

  1. Download the image source data from the network and, by default, put it in both the memory and disk caches
  2. Decode asynchronously and put the decoded data in the memory cache
  3. Call back on the main thread to render the image
  4. Maintain the disk and memory caches internally, with periodic expiry cleanup and an upper limit on the memory cache

The load flow, simplified (a rough sketch in code follows the list):

  1. Look the image up in the memory cache; if it is there and already decoded, return it directly; if it is there but not decoded, decode it asynchronously, cache the result in memory, then return it
  2. If nothing is in memory, look on disk; if the source data is found, load it into memory, decode it asynchronously, cache it and return it. If it isn't on disk either, download it from the network and run through the steps above
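
As mentioned above, here is a rough, hedged sketch of that three-tier lookup. It is illustrative only, not the SDWebImage/YYWebImage implementation, and it omits details such as forced decompression of the downloaded data:

    import UIKit

    final class SimpleImageLoader {
        private let memoryCache = NSCache<NSURL, UIImage>()           // decoded images
        private let ioQueue = DispatchQueue(label: "image.loader.io") // label is illustrative
        private let diskDirectory = FileManager.default.temporaryDirectory

        func loadImage(from url: URL, completion: @escaping (UIImage?) -> Void) {
            // 1. Memory cache: already decoded, return immediately
            if let cached = memoryCache.object(forKey: url as NSURL) {
                completion(cached)
                return
            }
            ioQueue.async {
                // Naive cache key; a real framework would hash the full URL
                let fileURL = self.diskDirectory.appendingPathComponent(url.lastPathComponent)
                if let data = try? Data(contentsOf: fileURL), let image = UIImage(data: data) {
                    // 2. Disk cache: decode off the main thread, promote to memory
                    self.memoryCache.setObject(image, forKey: url as NSURL)
                    DispatchQueue.main.async { completion(image) }
                } else {
                    // 3. Miss: download, write to disk, decode, cache, then call back
                    URLSession.shared.dataTask(with: url) { data, _, _ in
                        guard let data = data, let image = UIImage(data: data) else {
                            DispatchQueue.main.async { completion(nil) }
                            return
                        }
                        try? data.write(to: fileURL)
                        self.memoryCache.setObject(image, forKey: url as NSURL)
                        DispatchQueue.main.async { completion(image) }
                    }.resume()
                }
            }
        }
    }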

Analysis:

  • This flow fixes the problem that UIImage(named:) decodes on the main thread: decoding happens asynchronously, so the main thread is never blocked
  • The memory cache avoids frequent disk I/O
  • Caching the decoded image data avoids the CPU cost of decoding the same image repeatedly

6. Handling very large images

We calculated earlier that a decoded 1080p image occupies about 7.9 MB; for 4K or 8K images the figure is far larger. Loading several such images with the default decode-and-cache behaviour of SDWebImage or YYWebImage will blow up memory and crash the app.

You can set an option in SDWebImage or YYWebImage so that the downloaded image is not decoded automatically.

So how should large images be handled? There are two scenarios.

  1. A large image displayed in a small view

Solution: use the DownSampling (thumbnail) approach recommended by Apple, as described earlier.

  2. A full-screen long-image detail page, as in WeChat or Weibo, where the user drags around to inspect different parts of the picture

Solution: use Apple's CATiledLayer. It renders in tiles: as the user scrolls, only the region of the original image that maps to the visible area is decoded and rendered. We won't go into more detail here; interested readers can look at the official API. A rough sketch follows.
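
A minimal sketch of a view backed by CATiledLayer; the per-tile decoding and drawing is only outlined in comments:

    import UIKit

    class TiledImageView: UIView {
        // Back the view with a CATiledLayer so content is drawn tile by tile
        override class var layerClass: AnyClass { CATiledLayer.self }

        override func didMoveToWindow() {
            super.didMoveToWindow()
            (layer as? CATiledLayer)?.tileSize = CGSize(width: 512, height: 512)
        }

        override func draw(_ rect: CGRect) {
            // CATiledLayer calls draw(_:) once per needed tile, on background threads.
            // Decode and draw only the portion of the source image that covers `rect` here.
        }
    }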

7. To summarize

Understanding the details of the entire image loading process is well worth the effort: it helps us pick the right approach in day-to-day development and make sensible performance optimizations.