Preface

What can we do to optimize page performance? After a moment’s thought, I can think of the following:

  • Optimize layout calculation

  • Perform time-consuming operations asynchronously

  • Optimize images (images are the biggest memory bottleneck for apps)

  • Avoid off-screen rendering

Optimizing layout calculation

With Auto Layout, the most straightforward optimization is to calculate frames manually instead.

Under the hood, Auto Layout creates an engine object bound to the view hierarchy; the engine records the constraint information and feeds those parameters into a layout solver that ultimately produces each view’s frame.

If you can skip that solving step, you save the corresponding CPU cost.
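A minimal sketch of manual layout: override `layoutSubviews` and assign frames directly, so no constraint solving runs (the `iconView` and `titleLabel` subviews are hypothetical names for illustration):

```objectivec
// Hypothetical custom view: frames are computed by hand, so the Auto Layout
// engine never needs to solve constraints for these subviews.
- (void)layoutSubviews {
    [super layoutSubviews];
    CGFloat padding = 12;
    CGFloat iconSide = 40;
    self.iconView.frame = CGRectMake(padding, padding, iconSide, iconSide);
    CGFloat titleX = CGRectGetMaxX(self.iconView.frame) + padding;
    self.titleLabel.frame = CGRectMake(titleX, padding,
                                       self.bounds.size.width - titleX - padding,
                                       iconSide);
}
```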

Apple has optimized Auto Layout since iOS 12, bringing its efficiency close to that of manual layout, but we still have many users on systems below iOS 12.

Therefore, this optimization is worth considering.

That said, I haven’t personally run into any performance issues with Auto Layout yet.

Performing time-consuming operations asynchronously

Image decoding

Why do images need to be decoded?

First of all, both the local images bundled in our project and images downloaded from the network are compressed binary data (typically JPEG or PNG).

To display it on screen, that binary data must first be drawn onto a bitmap “canvas”.

This drawing process is called decoding.

Asynchronous image decoding

By default, the system decodes a UIImage on the main thread when it is assigned to a UIImageView or to a CALayer’s contents.

This decoding is time-consuming and can cause frame drops if left unhandled, so it should be executed asynchronously on a background thread.
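A common way to do this, sketched below under the assumption of an existing compressed `image` and a target `imageView`: drawing the image into a bitmap context on a background queue forces decompression there, instead of on the main thread at display time.

```objectivec
// Force-decode sketch: the draw call decompresses the JPEG/PNG data into a
// bitmap off the main thread; only the final assignment touches the UI.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = decoded; // UI work stays on the main thread
    });
});
```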

The third-party frameworks we use all handle this.

For example, SDWebImage creates a serial queue that performs decoding asynchronously.

The SDWebImageAvoidDecodeImage option controls whether this decoding step runs.
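For example, a sketch of skipping the forced decode when only pre-warming the cache (the `url` variable is assumed; the API shown is SDWebImage 5.x):

```objectivec
// Pre-fetch without decoding: the compressed data lands in the cache, and
// decompression is deferred until the image is actually displayed.
[[SDWebImageManager sharedManager] loadImageWithURL:url
                                            options:SDWebImageAvoidDecodeImage
                                            context:nil
                                           progress:nil
                                          completed:^(UIImage *image, NSData *data, NSError *error,
                                                      SDImageCacheType cacheType, BOOL finished, NSURL *imageURL) {
    // image is cached here, but not force-decoded
}];
```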

But here’s a puzzle: why not use a concurrent queue? A serial queue means the image decoding operations run one after another, which seems a bit inefficient, doesn’t it?

Caching cell heights

For scrolling lists with cells of varying heights, caching the computed heights avoids repeated calculation and improves efficiency.

The simplest approach is to store the computed height on the data model.

You can also use a third-party framework: UITableView+FDTemplateLayoutCell
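A minimal sketch of the data-model approach (the `FeedModel` class, its `cachedHeight` property, and the `calculateHeightForModel:` helper are hypothetical names):

```objectivec
// Height is computed once per model, then reused on every layout pass.
- (CGFloat)tableView:(UITableView *)tableView
    heightForRowAtIndexPath:(NSIndexPath *)indexPath {
    FeedModel *model = self.feeds[indexPath.row];
    if (model.cachedHeight == 0) {
        // Expensive text measurement runs only on the first request.
        model.cachedHeight = [self calculateHeightForModel:model];
    }
    return model.cachedHeight;
}
```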

Solving image problems

The image memory problem

Decoded memory = width × height × bytes per pixel (typically 4)

A single high-resolution image can therefore easily cause a large memory problem.

When a page shows several such images, memory usage explodes.
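As a worked example of the formula above:

```objectivec
// A 3000 x 2000 photo decoded as 32-bit RGBA:
size_t bytesPerPixel = 4;
size_t decodedBytes = 3000 * 2000 * bytesPerPixel;
// = 24,000,000 bytes, roughly 24 MB for one decoded image,
// regardless of how small the compressed JPEG file was.
```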

Avoid unnecessary decoding operations

SDWebImage decodes images by default when they are loaded.

I picked three random images from the Internet and loaded them with SDWebImageManager, toggling the decoding step: with decoding enabled, memory usage was about 10 MB; with decoding disabled, about 5 MB.

When using a third-party library, if you are only preloading images, consider skipping the decode.

DownSampling

DownSampling means specifying a target size during decoding so that only the needed portion of the data is decoded, reducing memory usage.

For example, if my control is 100 × 100 but the original image is 300 × 300, DownSampling decodes only the amount of data needed for the display size to achieve the desired result.

SDWebImage already supports this. You just need to pass the control’s size via the context parameter when loading the image.

```objectivec
@{SDWebImageContextImageThumbnailPixelSize: [NSValue valueWithCGSize:size]}
```

In my experiments, the effect was quite noticeable.

I found six large images online.

Without DownSampling, loading the 6 images consumed 25 MB.

With DownSampling to a specified size of 100 × 100, memory dropped straight to 5 MB.

Currently this is applied to the home-page feeds stream in our project, with very significant results: when loading fewer than three screens (about 4–6 images per screen), roughly 40 MB of memory was saved.

Note that the size you pass in should account for the Retina screen scale factor.
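Putting the two points together, a sketch of the full call with the scale factor applied (`imageView` and `url` are assumed; the API is SDWebImage 5.x):

```objectivec
// Request a thumbnail no larger than the 100 x 100 control. The context key
// takes a size in pixels, so the screen scale must be multiplied in.
CGFloat scale = UIScreen.mainScreen.scale; // 2 or 3 on Retina devices
CGSize pixelSize = CGSizeMake(100 * scale, 100 * scale);
[imageView sd_setImageWithURL:url
             placeholderImage:nil
                      options:0
                      context:@{SDWebImageContextImageThumbnailPixelSize:
                                    [NSValue valueWithCGSize:pixelSize]}];
```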

Therefore, when building an interface, try to load images at the same size as the control that displays them; otherwise, what looks like a small image may actually occupy a large amount of memory, and the CPU must also spend time scaling and cropping it.

It is recommended to manage images with Image Assets

  • Fast lookup

    If images are placed loose in a bundle, finding one requires walking the bundle’s folders, whereas lookup in Image Assets is optimized.

  • Smarter caching strategies

  • Smaller package size through app slicing

  • Support image stretching and other features.

Reducing Backing Store usage

What is the Backing Store?

It is the memory region that a CALayer’s contents property points to.

If a draw method is implemented, CALayer creates a Backing Store the same size as the view, performs the drawing into it, and then submits it to the frame buffer for rendering.

This step causes memory consumption.

UILabel optimizes the Backing Store for monochrome text, saving up to 75% of it, and automatically resizes the Backing Store to accommodate rich text or emoji.

For complex view styles, prefer composing multiple subviews rather than overriding draw; minimizing draw operations reduces this part of the memory consumption.

Reusing images

For solid-color images, try to reuse a single image and apply different Tint Colors for each rendering, achieving image reuse.
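For instance, a sketch with a hypothetical `"arrow"` asset and assumed buttons, rendering two colors from a single cached bitmap:

```objectivec
// One template image serves every color variant; only one bitmap is decoded.
UIImage *arrow = [[UIImage imageNamed:@"arrow"]
                     imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
self.backButton.tintColor = UIColor.whiteColor;   // white arrow
self.nextButton.tintColor = UIColor.grayColor;    // gray arrow, same bitmap
[self.backButton setImage:arrow forState:UIControlStateNormal];
[self.nextButton setImage:arrow forState:UIControlStateNormal];
```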

For frequently used images, use [UIImage imageNamed:@""] to take advantage of the system-level cache, improving efficiency and reducing repeated decoding.

For infrequently used images, load them by reading the file directly; they are released automatically after use, reducing memory.
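A sketch of the difference (the `guide.png` resource is a hypothetical example):

```objectivec
// imageNamed: goes through the system cache and may keep the bitmap alive;
// imageWithContentsOfFile: does not cache, so the memory is released once
// the image's owner goes away -- better for rarely shown images.
NSString *path = [NSBundle.mainBundle pathForResource:@"guide" ofType:@"png"];
UIImage *onboarding = [UIImage imageWithContentsOfFile:path];
```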

Off-screen rendering

What is off-screen rendering?

Normally, the rendering pipeline runs at 60 FPS or 120 FPS.

During each screen rendering cycle, the system takes rendered data from the frame buffer and renders it to the screen.

However, because of layer properties or other factors, some content cannot be composited directly into the frame buffer, so the system must allocate extra buffer space outside the screen to compose the frame data first.

This is called off-screen rendering.

Disadvantages of off-screen rendering

Off-screen rendering is costly because:

1. Extra buffer space is allocated, increasing memory usage

2. Switching between on-screen and off-screen rendering contexts is expensive

This makes it easy for a frame to miss its rendering deadline, causing stuttering.

What triggers off-screen rendering

In practice, off-screen rendering is usually triggered by:

  • Rounded corners and clipping

  • Set the mask of the layer

  • Set the shadow

  • Rasterization (shouldRasterize)

  • Anti-aliasing

Resolving off-screen rendering

For off-screen rendering caused by shadows, the fix is to pre-compute the shadow’s outline with a Bézier path and assign it to the layer’s shadowPath.
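A minimal sketch of this fix on an assumed `view`:

```objectivec
// With an explicit shadowPath, Core Animation no longer needs an off-screen
// pass to derive the shadow shape from the layer's alpha channel.
view.layer.shadowColor = UIColor.blackColor.CGColor;
view.layer.shadowOpacity = 0.3;
view.layer.shadowOffset = CGSizeMake(0, 2);
view.layer.shadowPath = [UIBezierPath bezierPathWithRoundedRect:view.bounds
                                                   cornerRadius:8].CGPath;
```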

Here I want to introduce more about the optimization of rounded corners.

Rounded corner scheme for UIImageView

At first, my idea was the same as the common solutions online:

Round the image’s corners on a background thread, cache the resulting image, and then switch back to the main thread to display it. This avoids the off-screen rendering caused by corner clipping.
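A sketch of that idea (with assumed `image`, `imageView`, and target `size` variables; caching and cancellation are omitted):

```objectivec
// Clip the rounded corners into an off-main-thread bitmap, then hand the
// finished image back to the main thread -- no cornerRadius/masksToBounds,
// so no off-screen rendering at display time.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
    UIGraphicsBeginImageContextWithOptions(size, NO, UIScreen.mainScreen.scale);
    [[UIBezierPath bezierPathWithRoundedRect:(CGRect){CGPointZero, size}
                                cornerRadius:size.width / 2] addClip];
    [image drawInRect:(CGRect){CGPointZero, size}];
    UIImage *rounded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = rounded;
    });
});
```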

But in actual use, I think this approach still has problems.

Problem 1: Memory usage increases

You can’t keep re-rounding images on the fly in a scrolling list.

Otherwise you are constantly spending CPU on corner rounding and thread switching.

Therefore, we need to use space for time and cache the rounded corners.

Here’s the problem: an image already loaded and cached by the networking framework now also has a rounded-corner copy, which roughly doubles its memory usage.

Problem 2: the feature is easy to implement, but hard to implement well

Although there are many demos on the Internet, they are not well written in my opinion.

  • First, they brutally dispatch the drawing onto a global concurrent queue, with no consideration of thread cost or safety.

  • Second, there is no solid logic to guard against cell reuse.

    We usually display these images in a scrolling view where cells are reused. If you don’t cancel the previous image-fetch operation the way SDWebImage does, how do you guarantee that reuse won’t show the wrong image?

  • Cumbersome to use

    If you implement your own image-fetching logic, you will find it far less convenient than SDWebImage’s category API. (Of course, we could also wrap our own category to match.)

All of these problems could be addressed by extending SDWebImage.

But that takes time and effort, so I’ll try it if I get the chance in the future.

The rounded-corner solution I think is best

I ran a simple check on a current device (iPhone 11): after eliminating the off-screen rendering by drawing rounded-corner images asynchronously, Instruments showed no significant change in either CPU or GPU load.

If you must optimize, having the designer cut a mask image to overlay is, in my opinion, the best solution.

After all, as mentioned earlier, Apple considers composing subviews much cheaper than drawing content yourself.

Reference links:

Github.com/seedante/iO…

www.jianshu.com/p/a6bfaf1e0…