Background

UIView is one of the most fundamental UI components in iOS development; almost all display controls inherit from it. Different kinds of UIView let us display text, images, and more on the screen, so understanding the drawing principle of UIView is of great help for later custom drawing and performance optimization.

Figure: schematic diagram of the drawing and display process

Take Hello World as an example. The entire drawing and display process is shown in the figure above. On the CPU side there are four stages: (1) Layout, responsible for UI layout and text calculation; (2) Display, the drawing stage; (3) Prepare, mainly image decoding; and (4) Commit, which submits the bitmap. The GPU side is mainly the render pipeline: vertex shader, primitive assembly, rasterization, fragment shader, fragment processing, and finally the frame buffer.

Next, let’s break down the process in detail.

UIView and CALayer

1 UIView inherits from UIResponder and can respond to events. It holds a CALayer member called layer internally and adopts the CALayerDelegate protocol.

2 CALayer inherits from NSObject and is responsible for drawing the UIView's content; the layer draws a bitmap into its backing store.

3 Most of the view's geometric and visual properties, such as frame, bounds, and backgroundColor, are handled directly by the view's underlying layer.

4 The backing store is essentially a bitmap, and its content is determined by the contents of the CALayer.

5 UIView provides the content and is responsible for handling touch events and participating in the responder chain, while CALayer is responsible for displaying the content (contents). This reflects the single-responsibility design principle.
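As a quick illustration (a minimal sketch; this property forwarding is standard UIKit behavior):

import UIKit

let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 50))
print(view.layer.frame)                    // (0.0, 0.0, 100.0, 50.0), same geometry as view.frame
view.backgroundColor = .red
print(view.layer.backgroundColor as Any)   // red: the view forwards it to its layer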

What is a CALayer?

The official documentation defines it as an object that manages image-based content and allows you to animate that content. It is usually used to provide the backing store for a view, but it can also be used to display content without a view.

The main job of a layer is to manage the visual content you provide, but the layer itself also has visual properties that can be set (such as background color, border, and shadow).

In addition to managing visual content, this Layer also maintains information about content geometry (such as position, size, and transformation) that is used to display that content on the screen.

Why not just use a UIView or CALayer to handle all events?

There are two main considerations:

1 Different responsibilities

The main job of UIView is to receive and respond to events; CALayer's primary responsibility is to display the UI. This embodies the single-responsibility design principle.

2 The need for reuse

On macOS and iOS, NSView and UIView behave similarly but differ significantly in implementation; both, however, rely on CALayer. Separating the layer out means a single CALayer implementation can be wrapped and reused by both platforms.

CALayer’s display foundation

There is a very important property in CALayer called contents, which provides what to display. It is defined as follows:

/* An object providing the contents of the layer, typically a CGImageRef
 * or an IOSurfaceRef, but may be something else. (For example, NSImage
 * objects are supported on Mac OS X 10.6 and later.) Default value is nil.
 * Animatable. */
open var contents: Any?

In other words, contents is the object that provides the layer's content: usually a CGImageRef or an IOSurfaceRef, but it can also be something else (for example, Mac OS X 10.6 and later support NSImage objects). The default value is nil.

In effect, the contents property holds the bitmap produced by the device's rendering pipeline (often referred to as the backing store). When the screen refreshes, this bitmap is read from the CALayer and rendered to the screen.

Because the rendered content is cached as a static bitmap, Core Animation reuses the stored bitmap for display until the drawRect: method is triggered to produce new content.

Note: if the layer belongs to a view, avoid setting this property directly; the interaction between the view and the layer usually causes the view to replace the property's contents during a subsequent update.
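For a standalone layer that does not back a view, setting contents directly is fine. A minimal sketch, assuming a bundled image named "rabbit.png":

import UIKit

let hostView = UIView()
let imageLayer = CALayer()
imageLayer.frame = CGRect(x: 0, y: 0, width: 120, height: 120)
imageLayer.contents = UIImage(named: "rabbit.png")?.cgImage   // hand the layer a bitmap directly
hostView.layer.addSublayer(imageLayer)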

CALayer’s Layer tree

UIView and CALayer each have their own tree structure: a view can have subviews and a layer can have sublayers.

There are three layer trees in iOS:

1 Layer tree (model tree)

This holds the model information of each tree node, such as the familiar frame, affineTransform, backgroundColor, and so on. This model data is what we configure during app development: any change we make to a view or layer is reflected in the model tree.

2 Presentation tree

This is an intermediate layer that our app cannot manipulate directly; its content is generated by the system in the Render Server. The intermediate states of a CAAnimation change their properties at this layer to carry out the animation.

3 Render tree

This corresponds directly to the tree that is submitted to the Render Server for display.

The relationship between the three layer trees is shown below. Everything finally submitted to the Render Server originates from the model tree, including the parameters of any animations.
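A minimal sketch of the model/presentation difference, assuming view is already on screen: while an animation is in flight, the model layer already holds the final value, whereas presentation() reflects the on-screen state.

import UIKit

UIView.animate(withDuration: 2) {
    view.frame.origin.x = 300
}
// Queried immediately after the animation starts:
print(view.layer.position.x)                        // model tree: already the final value
print(view.layer.presentation()?.position.x ?? 0)   // presentation tree: the in-flight value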

UIView drawing principle

From the introduction above we know that display is managed by the layer, but we normally do not realize display by operating the layer directly. Setting the layer's contents property works, but it is cumbersome. For example, to display Hello World on screen, we would choose a UILabel and simply set its text property. UILabel also inherits from UIView and internally calls the relevant layer methods to perform the drawing. The drawing flow is charted below.

The drawing flow chart can be summarized as follows:

1 When [UIView setNeedsDisplay] is called, it actually calls the layer's method of the same name, setNeedsDisplay.

2 Core Animation then catches the layer-tree change, submits a CATransaction, and fires a RunLoop observer callback, in which [CALayer display] is called to perform the actual drawing of the current view.

3 [CALayer display] checks whether the layer's delegate responds to the displayLayer: method. If it does not, the system drawing process begins. If it does respond, we are effectively given an entry point for asynchronous drawing: some leeway to produce the bitmap ourselves off the main thread. How to do that is discussed later; a sketch of the interception point follows.
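A hedged sketch: UIView is already its own layer's delegate, so implementing the CALayerDelegate method in a subclass is enough to take over display (the asset name here is a placeholder):

import UIKit

final class ContentsView: UIView {
    // Implementing the CALayerDelegate display method intercepts
    // -[CALayer display] before the system drawing path runs.
    func display(_ layer: CALayer) {
        layer.contents = UIImage(named: "rabbit.png")?.cgImage
    }
}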

System drawing process

In essence, this is the process of creating the backing store.

1 When the [CALayer display] method is called, the layer checks whether a delegate implements the drawing method; if not, system drawing is triggered.

2 During system drawing, the backing store (a CGContextRef) is created first. Note that every layer has a context that points to a cache, namely the backing store.

3 If the delegate implements -drawLayer:inContext:, that method is called and the newly created CGContextRef is passed in. Otherwise -[CALayer drawInContext:] is called, which in turn calls [UIView drawRect:]; the drawing still lands in the same CGContextRef, and UIGraphicsGetCurrentContext() inside drawRect: returns the CGContextRef that CALayer created.

4 The backing store is then filled with the data drawn in the drawRect: method, after which the Render Server hands the bitmap data to the GPU for rendering and display on screen.

Note: every UIView's layer has a backing store for its backing content, which is in fact CGImage data, i.e. bitmap data, for the GPU to read and display.

The drawRect flow in detail

The drawRect method is called during the system drawing process, and at development time it is also the method we deal with most, so let's review its calling sequence in extra detail.

1 When we call [UIView setNeedsDisplay], the underlying [CALayer setNeedsDisplay] is called, which adds a dirty mark to the layer while still showing the original content. It does no actual work, so calling -setNeedsDisplay multiple times does not hurt performance.

2 This then fires the [CALayer display] method.

3 CALayer creates a CGContextRef, creates the backing store, and pushes the CGContextRef onto the graphics context stack (so CGContextRefs can be nested). When we call a UIKit API such as UIRectFill(), the drawing result automatically goes into the CGContextRef at the top of the stack; we can also call UIGraphicsGetCurrentContext() directly to obtain the CGContextRef currently at the top of the graphics context stack (a sketch follows this list).

4 Then the drawRect method executes, and the drawn content ends up in the CGContextRef's backing store.

5 The backing store is stored in the properties associated with the layer model tree and submitted to the Render Server at commit time.
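As referenced above, a minimal sketch of steps 3 and 4: inside draw(_:), the current context is exactly the CGContextRef that CALayer pushed onto the graphics context stack.

import UIKit

final class CircleView: UIView {
    override func draw(_ rect: CGRect) {
        // The current context is the one CALayer created and pushed.
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.addEllipse(in: rect.insetBy(dx: 2, dy: 2))
        ctx.strokePath()   // the result lands in the layer's backing store
    }
}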

Special Scene — UIImageView

When we use UIImageView, the view still has a CALayer, but the layer uses the CGImageRef (from the UIImage) directly. The image inside the UIImage we hand to UIImageView may not yet be decoded, so there is a prepare step before the CA commit; in prepare, the third step of the CA transaction, you can see the following call stack:

  1. CA::Layer::prepare_commit
  2. Render::prepare_image
  3. Render::copy_image
  4. Render::create_image
  5. decodeImage

Note: UIImage is in fact a lightweight wrapper around CGImage, so the UIImage in a UIImageView can directly supply its own CGImage data as the CALayer's contents, without needing to create a CGContextRef.

Asynchronous drawing process

The reference code is as follows

/** Maintain thread-safe drawing state */
@property (atomic, assign) ADLayerStatus status;

- (void)setNeedsDisplay {
    // A new drawing request arrived: cancel the in-flight drawing task
    self.status = ADLayerStatusCancel;
    [super setNeedsDisplay];
}

- (void)display {
    // Mark that drawing has started
    self.status = ADLayerStatusDrawing;
    if ([self.delegate respondsToSelector:@selector(asyncDrawLayer:inContext:canceled:)]) {
        [self asyncDraw];
    } else {
        [super display];
    }
}

- (void)asyncDraw {
    __block ADQueue *q = [[ADManager shareInstance] ad_getExecuteTaskQueue];
    __block id<ADLayerDelegate> delegate = (id<ADLayerDelegate>)self.delegate;
    dispatch_async(q.queue, ^{
        // Bail out if this task was canceled
        if ([self canceled]) {
            [[ADManager shareInstance] ad_finishTask:q];
            return;
        }
        // Create the bitmap context
        CGSize size = self.bounds.size;
        BOOL opaque = self.opaque;
        CGFloat scale = [UIScreen mainScreen].scale;
        CGColorRef backgroundColor = (opaque && self.backgroundColor) ? CGColorRetain(self.backgroundColor) : NULL;
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
        CGContextRef context = UIGraphicsGetCurrentContext();
        if (opaque && context) {
            CGContextSaveGState(context);
            {
                if (!backgroundColor || CGColorGetAlpha(backgroundColor) < 1) {
                    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
                    CGContextAddRect(context, CGRectMake(0, 0, size.width * scale, size.height * scale));
                    CGContextFillPath(context);
                }
                if (backgroundColor) {
                    CGContextSetFillColorWithColor(context, backgroundColor);
                    CGContextAddRect(context, CGRectMake(0, 0, size.width * scale, size.height * scale));
                    CGContextFillPath(context);
                }
            }
            CGContextRestoreGState(context);
            CGColorRelease(backgroundColor);
        } else {
            CGColorRelease(backgroundColor);
        }
        // Let the delegate draw into the context
        [delegate asyncDrawLayer:self inContext:context canceled:[self canceled]];
        // Bail out if canceled while drawing
        if ([self canceled]) {
            [[ADManager shareInstance] ad_finishTask:q];
            UIGraphicsEndImageContext();
            return;
        }
        // Capture the drawn image
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [[ADManager shareInstance] ad_finishTask:q];
        if ([self canceled]) {
            return;
        }
        // Hand the bitmap to the layer's contents on the main queue
        dispatch_async(dispatch_get_main_queue(), ^{
            self.contents = (__bridge id)(image.CGImage);
        });
    });
}

CALayerDelegate method resolution

As we have seen above, both system drawing and asynchronous drawing are tied to methods of CALayerDelegate, so let's look at those methods.

/** Delegate methods. **/
public protocol CALayerDelegate : NSObjectProtocol {

    /* If defined, called by the default implementation of the -display
     * method, in which case it should implement the entire display
     * process (typically by setting the `contents' property). */
    @available(iOS 2.0, *)
    optional func display(_ layer: CALayer)

    /* If defined, called by the default implementation of -drawInContext: */
    @available(iOS 2.0, *)
    optional func draw(_ layer: CALayer, in ctx: CGContext)

    /* If defined, called by the default implementation of the -display method.
     * Allows the delegate to configure any layer state affecting contents prior
     * to -drawLayer:inContext: such as `contentsFormat' and `opaque'. It will not
     * be called if the delegate implements -displayLayer. */
    @available(iOS 10.0, *)
    optional func layerWillDraw(_ layer: CALayer)

    /* Called by the default -layoutSublayers implementation before the layout
     * manager is checked. Note that if the delegate method is invoked, the
     * layout manager will be ignored. */
    @available(iOS 2.0, *)
    optional func layoutSublayers(of layer: CALayer)

    /* If defined, called by the default implementation of the
     * -actionForKey: method. Should return an object implementing the
     * CAAction protocol. May return `nil' if the delegate doesn't specify
     * a behavior for the current event. Returning the null object (i.e.
     * `[NSNull null]') explicitly forces no further search. (I.e. the
     * +defaultActionForKey: method will not be called.) */
    @available(iOS 2.0, *)
    optional func action(for layer: CALayer, forKey event: String) -> CAAction?
}

func display(_ layer: CALayer)

This method is called when a layer marks its contents as needing update (setNeedsDisplay()), for example to set the layer's contents property:

private lazy var delegate = LayerDelegate()

private lazy var sublayer: CALayer = {
    let layer = CALayer()
    layer.delegate = self.delegate
    return layer
}()

// After sublayer.setNeedsDisplay() is called, `display(_:)` is invoked.

class LayerDelegate: NSObject, CALayerDelegate {
    func display(_ layer: CALayer) {
        layer.contents = UIImage(named: "rabbit.png")?.cgImage
    }
}

func draw(_ layer: CALayer, in ctx: CGContext)

Same as display(_:), except that you can use the layer's CGContext to implement the display process:

// sublayer.setNeedsDisplay()
class LayerDelegate: NSObject, CALayerDelegate {
    func draw(_ layer: CALayer, in ctx: CGContext) {
        ctx.addEllipse(in: ctx.boundingBoxOfClipPath)
        ctx.strokePath()
    }
}

draw(_ rect: CGRect)

1 First the layer's draw(_:in:) is called;

2 Then super.draw(_:in:) creates and configures the drawing environment;

3 Finally, the layer's super.draw(_:in:) calls the view's draw(_:) method.

/// Note: this method does nothing by default; whether or not super.draw(_:) is called makes no difference.
override func draw(_ rect: CGRect) {
    print(#function)
}

override func draw(_ layer: CALayer, in ctx: CGContext) {
    print(#function)
}

// Prints "draw(_:in:)"

Adding the super.draw(_:in:) call makes the view's draw(_:) run as well:

override func draw(_ rect: CGRect) {
    print(#function)
}

override func draw(_ layer: CALayer, in ctx: CGContext) {
    print(#function)
    super.draw(layer, in: ctx)
}

// Prints "draw(_:in:)"
// Prints "draw(_:)"

Note: the layer's display(_:) or draw(_:in:) methods are called automatically only when the system detects that the view's draw(_:) method is implemented. Otherwise they must be triggered manually by calling the layer's setNeedsDisplay() method.

func layerWillDraw(_ layer: CALayer)

Called before draw(_ layer: CALayer, in ctx: CGContext). You can use this method to configure any layer state that affects the contents (such as contentsFormat and isOpaque).
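A small sketch; the contentsFormat choice here is only an example:

import UIKit

class OpaqueLayerDelegate: NSObject, CALayerDelegate {
    func layerWillDraw(_ layer: CALayer) {
        // Configure state that affects the contents before draw(_:in:) runs.
        layer.contentsFormat = .RGBA8Uint
        layer.isOpaque = true
    }

    func draw(_ layer: CALayer, in ctx: CGContext) {
        ctx.setFillColor(UIColor.white.cgColor)
        ctx.fill(ctx.boundingBoxOfClipPath)
    }
}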

func layoutSublayers(of layer: CALayer)

Similar to UIView's layoutSubviews(). This method is called when the layer's bounds have changed and its sublayers may need to be rearranged (for example, after resizing the frame).

func action(for layer: CALayer, forKey event: String) -> CAAction?

CALayer is able to perform animations because it is defined in the Core Animation framework and sits at the core of the operations Core Animation performs.

In other words, CALayer is not only responsible for displaying content but also for animation (strictly speaking, Core Animation and the hardware perform the operations, while CALayer stores the data they need, similar to a model object).

Consequently, most CALayer property changes come with implicit animations attached. In UIView, this effect is turned off by default; you can turn it back on through this layer delegate method (it is also enabled automatically inside UIView animation blocks) and return the object that determines the animation. If it returns nil, the default implicit animation is used.

Example: using the layer's delegate method to return a basic animation that moves an object from left to right.

final class CustomView: UIView {
    override func action(for layer: CALayer, forKey event: String) -> CAAction? {
        guard event == "moveRight" else {
            return super.action(for: layer, forKey: event)
        }
        let animation = CABasicAnimation()
        animation.valueFunction = CAValueFunction(name: .translateX)
        animation.fromValue = 1
        animation.toValue = 300
        animation.duration = 2
        return animation
    }
}

let view = CustomView(frame: CGRect(x: 44, y: 44, width: UIScreen.main.bounds.width - 88, height: 300))
view.backgroundColor = .orange
self.view.addSubview(view)

let action = view.layer.action(forKey: "moveRight")
action?.run(forKey: "transform", object: view.layer, arguments: nil)

That covers the principle and process of UIView drawing. Next, we analyze how the drawn content is rendered and displayed.

Rendering principle

Graphics rendering mainly uses the GPU's parallel computing power to render and display graphics on every pixel of the screen.

The most fundamental step in rendering is rasterization, the process of converting data into visible pixels. The GPU and its drivers implement the graphics-processing models of OpenGL and DirectX. Strictly speaking, OpenGL is not a function library but a standard that defines the API and its behavior; the concrete libraries are implemented by third parties, usually the graphics-card vendors.

GPU architecture model

A GPU contains many processing cores for concurrent execution and uses two levels of cache internally (L1 and L2). Its architecture together with the CPU takes one of two forms: separated or coupled.

1 Separated structure

The CPU and GPU have their own storage systems and are connected through the PCI-E bus.

The drawback of this structure is that the PCI-E bus has relatively low bandwidth and high latency, so data transfer becomes the performance bottleneck. It is nevertheless the most widely used form, found in PCs and smartphones.

2 Coupled structure

The CPU and GPU share memory and cache. AMD's APUs use this structure, which is currently found in game consoles such as the PS4.

GPU rendering process

It mainly includes the following stages:

1 Vertex shader

Performs 3D coordinate transformations and sets the attribute values of each vertex.

2 Shape (primitive) assembly

Assembles vertices into basic shapes (primitives).

3 Geometry shader

Constructs new vertices to form additional shapes, such as extra triangles.

4 Rasterization

Maps the shapes onto the corresponding pixels of the screen, generating fragments that contain all the data for the pixel structure.

5 Fragment shader

Discards pixels outside the view and computes the color of each remaining pixel.

6 Tests and blending

Decides whether to discard or blend pixels based on their position (for example, whether they lie behind other pixels) and their transparency.

Texture

Making graphics more realistic requires more vertices and more color attributes, which increases performance overhead. To improve production and execution efficiency, textures are commonly used to represent detail.

A texture is usually a 2D image (1D and 3D textures also exist), and a texture can normally be fed directly into the fifth stage of the graphics pipeline, the fragment shader.

Graphics rendering technology stack

An app draws visual content using frameworks such as Core Graphics, Core Animation, and Core Image, and these frameworks also depend on one another.

All of these frameworks go through OpenGL to call the GPU for drawing, and the content is finally displayed on the screen, as shown below.

An introduction to each framework

1 UIKit

UIKit itself has no ability to draw to the screen; it is mainly responsible for responding to user events (UIView inherits from UIResponder), and event delivery is essentially implemented by traversing the view tree layer by layer.

2 Core Animation

Core Animation is a compositing engine whose job is to combine the different visual elements on the screen as quickly as possible. Those visual elements are decomposed into independent layers (CALayers), which are stored in a system called the layer tree. In essence, CALayer is the foundation of everything the user sees on screen.

3 Core Graphics

Based on the Quartz advanced graphics engine, mainly used for drawing images at runtime. Developers can use this framework to handle path-based drawing, transformation, color management, off-screen rendering, patterns, gradients and shadows, image data management, image creation and image masking, and PDF document creation, display and analysis.

4 Core Image

In contrast to Core Graphics, which creates images at run time, Core Image processes images created before run time. The Core Image framework ships with a series of ready-made image filters for efficiently processing existing images.

5 OpenGL(ES)

OpenGL ES (OpenGL for Embedded Systems, or GLES) is a subset of OpenGL designed for embedded devices.

6 Metal

Metal plays a similar role to OpenGL ES but is Apple's own low-level graphics API. Most developers do not use Metal directly, yet virtually all developers use it indirectly: Core Animation, Core Image, SceneKit, SpriteKit, and the other rendering frameworks are built on top of Metal. When debugging an OpenGL program on a real device, the console prints a log showing Metal being enabled, from which you can infer that Apple has implemented a mechanism to seamlessly bridge OpenGL commands to Metal and let Metal handle the actual interaction with the hardware.

Render Server

On iOS, the app itself is not responsible for rendering; rendering is handled by a dedicated process, the Render Server.

The main processing process is as follows:

1 The app handles events, such as user taps. During this process the app may need to update the view tree, and correspondingly the layer tree is updated as well.

2 The app computes the display content on the CPU, such as view creation, layout calculation, image decoding, and text drawing. Once the display content has been computed, the app packages the layers and sends them to the Render Server on the next RunLoop iteration, completing a Commit Transaction.

The commit transaction can be broken down into the following steps (a sketch follows the list):

a) Layout: view construction, including overrides of layoutSubviews and filling in subviews with addSubview.

b) Display: view drawing. By default this only sets up the drawing metadata; overriding the view's drawRect: method allows custom UIView drawing by producing the backing bitmap inside it, which consumes CPU time and memory.

c) Prepare: an additional step that mainly handles image decoding and format conversion.

d) Commit: packaging the layers and sending them to the Render Server via IPC. This is done recursively, because layers and views form a tree structure.
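As referenced above, a hedged sketch of the transaction mechanism from the app side; layer is assumed to be some existing CALayer. Property changes made inside a transaction are batched and committed together (implicit transactions are flushed automatically at the end of the RunLoop):

import UIKit

CATransaction.begin()
CATransaction.setDisableActions(true)    // suppress implicit animations for this batch
layer.backgroundColor = UIColor.red.cgColor
layer.cornerRadius = 8
CATransaction.commit()                   // the batched changes reach the Render Server
                                         // when the outermost transaction is flushed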

3 The Render Server performs the OpenGL and Core Graphics related work, such as computing values from the layers' properties (for an animated property it computes the animation's intermediate values) and preparing the OpenGL rendering.

4 The GPU renders the layer to the screen through components such as the frame buffer and the video controller.

To hit a 60 FPS refresh rate, each RunLoop iteration has at most 16.67 ms, so the steps above need to be executed in parallel, pipelined across frames.

Rendering and the RunLoop

1 The iOS display system is driven by the VSync signal, which is generated by the hardware clock and fires 60 times per second (the exact value depends on the hardware, e.g. 59.97 on iPhone).

2 After the iOS graphics service receives the VSync signal, it notifies the app through IPC.

3 After launch, the app's RunLoop registers a CFRunLoopSource to receive the clock-signal notification through a mach_port, and the source callback then drives the animation and display of the whole app.

Note: in practice, no VSync-related source is observed to be registered after app launch, so the source above is presumably registered by the Render Server process, which listens for the VSync signal to drive layer rendering and then submits to the GPU.

4 Core Animation registers an observer in the RunLoop that listens for the BeforeWaiting and Exit events. This observer has a priority of 2,000,000, which is lower than that of the other common observers.

5 When a touch event arrives, the RunLoop wakes up and the app's code performs operations such as creating and adjusting the view hierarchy, setting a UIView's frame, changing a CALayer's opacity, or adding an animation to a view. These operations are eventually captured by CALayer and submitted, via CATransaction, as an intermediate state.

6 When all the above work is done and the RunLoop is about to go to sleep (or exit), the observers are notified. At that point the observer registered by Core Animation merges all intermediate states and, in its callback, submits them to the GPU for display.

7 If there is an animation, Core Animation triggers this process repeatedly through mechanisms such as CADisplayLink.
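A minimal sketch of hooking per-frame work to VSync with CADisplayLink, the same mechanism mentioned in step 7:

import UIKit

final class FrameDriver {
    private var link: CADisplayLink?

    func start() {
        link = CADisplayLink(target: self, selector: #selector(step(_:)))
        link?.add(to: .main, forMode: .common)   // keep firing while scrolling
    }

    @objc private func step(_ link: CADisplayLink) {
        // Called once per VSync; drive custom animation or drawing here.
        print("frame at \(link.timestamp)")
    }

    func stop() {
        link?.invalidate()
    }
}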

Display principle

Taking video playback on the screen as an example, the structure of the display system is shown below.

1 The CPU submits graphics data to the GPU over the bus.

2 The GPU renders it into frames of data and submits them to the frame buffer.

3 Driven by the VSync signal, the video controller reads the frame buffer frame by frame and passes the data to the screen controller, which finally displays it on the screen.

Double buffering mechanism

To solve the efficiency problem of a single frame buffer (concurrent reads and writes are inefficient), a double-buffering mechanism is adopted: the GPU pre-renders a frame into one buffer for the video controller to read, and once the next frame has been rendered, the GPU simply points the video controller at the second buffer, as shown below:

While double buffering improves efficiency, it introduces a tearing problem. If the video controller has not finished reading (the screen has shown, say, half a frame) when the GPU submits a new frame to the other buffer and swaps the two, the video controller draws the second half of the screen from the new frame, and the image visibly tears.

Vertical synchronization (V-sync)

To solve image tearing, GPUs usually support a mechanism called V-Sync. With VSync enabled, the GPU waits for the display's VSync signal before rendering a new frame and updating the buffer. This eliminates tearing and smooths the picture, but it consumes more computing resources and can introduce some latency.

Stutters and dropped frames

After the VSync signal arrives, the system's graphics service notifies the app through mechanisms such as CADisplayLink, and the app's main thread starts computing the display content on the CPU: view creation, layout calculation, image decoding, text drawing, and so on. The CPU then submits the computed content to the GPU, which transforms, composites, and renders it. The GPU submits the result to the frame buffer, where it waits for the next VSync signal to be displayed on screen. Because of the VSync mechanism, if the CPU or GPU has not finished its submission within one VSync period, that frame is discarded and shown at the next opportunity, while the display keeps the previous content unchanged. That is why an interface stutters.

Performance optimization

Through the walkthrough above we know the whole process by which UIView is drawn and displayed. In real development, however, improper operations can make any stage of this process take too long, causing stutters and even dropped frames. We can optimize performance in the following areas to keep pages displaying and scrolling smoothly.

The CPU level

At the CPU level, resource consumption can be reduced from the following aspects

1 Object Creation

Creating an object allocates memory, adjusts properties, and may even read files (for example, creating a UIViewController may read a XIB file), all of which is CPU intensive. Therefore prefer lightweight objects to heavy ones. For example:

  1. If an element does not need to respond to touch events, displaying it with a CALayer is cheaper than with a UIView.

  2. If object creation involves no UI operations, try to move it to a background thread.

  3. For performance-sensitive view objects, prefer creating them in code rather than from a Storyboard.

  4. If objects can be reused, reuse them through a cache pool.

2 Object Adjustment

For example, modifying CALayer properties, adjusting the view hierarchy, and adding or removing views.

Note: CALayer does not implement ordinary accessor methods for its properties. At runtime it uses resolveInstanceMethod: to add accessors to the object on the fly and stores the property values in an internal dictionary, while also notifying the delegate and creating animations. UIView's display-related properties (such as frame/bounds/transform) are actually mapped from CALayer properties.

3 Object Destruction

Although destroying objects does not consume many resources, the cost cannot be ignored. When a container class holds a large number of objects, the overhead of releasing them becomes obvious, so you can hand the objects to a background thread and let them be released there.
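A hedged sketch of the classic trick; releaseOnBackground is a name made up here. The last strong reference is moved into a background block so the container's teardown happens off the main thread:

import Foundation

func releaseOnBackground(_ objects: NSArray) {
    DispatchQueue.global(qos: .utility).async {
        _ = objects.count   // the block holds the last strong reference; the
                            // array deallocates on this queue when it returns
    }
}

// Usage: hand over the container, then drop the main-thread reference.
// releaseOnBackground(bigArray); bigArray = nil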

4 Layout Calculation

Calculating view layout is the most common consumer of CPU resources, and the result is ultimately applied through UIView's frame/bounds/center properties. To avoid wasting CPU, compute the layout in advance and adjust the corresponding properties in one pass; do not compute and adjust these properties repeatedly and frequently.

5 Autolayout

Autolayout is a technology advocated by Apple that improves development efficiency in most cases, but it can cause serious performance problems for complex views, so implement views with high performance requirements in code.

6 Text computing

If a page contains a lot of text, computing text width and height takes significant resources and is unavoidable. You can compute the text size with the rich-text method [NSAttributedString boundingRectWithSize:options:context:] and draw the text with [NSAttributedString drawWithRect:options:context:], executing both on a background thread to avoid blocking the main thread; alternatively, use CoreText's C-based, cross-platform API to draw the text.
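A minimal sketch of the NSAttributedString approach just described, measured off the main thread (the 300-point width is an arbitrary example):

import UIKit

let attributed = NSAttributedString(
    string: "Hello World",
    attributes: [.font: UIFont.systemFont(ofSize: 15)]
)
DispatchQueue.global(qos: .userInitiated).async {
    let bounds = attributed.boundingRect(
        with: CGSize(width: 300, height: .greatestFiniteMagnitude),
        options: [.usesLineFragmentOrigin, .usesFontLeading],
        context: nil
    )
    // Draw into a bitmap context here with attributed.draw(with:options:context:),
    // then hop back to the main queue to display the resulting image.
    print(bounds.size)
}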

7 Text Rendering

All the text controls you can see on screen, including UIWebView, are typeset and rendered into bitmaps through CoreText underneath. Common text controls such as UILabel and UITextView do their typesetting and drawing on the main thread, so displaying large amounts of text puts heavy pressure on the CPU. The only solution is to write a custom text control and draw the text asynchronously with TextKit or the lower-level CoreText.

8 Picture decoding

When an image is created with UIImage or CGImageSource methods, the image data is not decoded immediately. Only when the image is assigned to a UIImageView or to CALayer.contents, and just before the CALayer is committed to the GPU, is the data in the CGImage decoded, and this happens on the main thread.

The common workaround is to draw the image into a CGBitmapContext on a background thread first and then create the image directly from that bitmap. Popular networking image libraries all provide this capability.
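A hedged sketch of background pre-decoding using UIGraphicsImageRenderer (iOS 10+); drawing the image forces the decode, so this is one convenient stand-in for the CGBitmapContext approach:

import UIKit

func predecode(_ image: UIImage, completion: @escaping (UIImage) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let renderer = UIGraphicsImageRenderer(size: image.size)
        let decoded = renderer.image { _ in
            image.draw(at: .zero)   // decoding happens here, off the main thread
        }
        DispatchQueue.main.async { completion(decoded) }   // ready to display without decoding
    }
}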

9 Image Rendering

Drawing an image usually refers to drawing into a canvas with the CG-prefixed methods and then creating and displaying the picture from the canvas, the most common case being the [UIView drawRect:] method. Since Core Graphics methods are generally thread-safe, image drawing can easily be moved to a background thread.

GPU level

In contrast to the CPU, the GPU takes the submitted textures and vertex descriptions (triangles), applies transforms, blending, and rendering, and outputs the result to the screen. What it mainly works with are textures (images) and shapes (vector shapes approximated by triangles). We can therefore optimize in the following three areas.

1 Texture rendering

All bitmaps, including images, text, and rasterized content, are ultimately committed from memory to video memory and bound as GPU textures. Both the commit to video memory and the GPU's sampling and rendering of textures consume considerable GPU resources.

When a large number of images must be shown in a short time (for example a UITableView full of images being swiped quickly), CPU usage stays low while GPU usage spikes, causing dropped frames and stutter. The effective remedy is to minimize the number of images displayed in a short period, merging several images into one where possible.

2. Blending of views

When multiple views or layers overlap, the GPU must first blend them together. If the view hierarchy is complex, the blending itself consumes a lot of GPU resources. To reduce GPU consumption, keep the number of views and layers as small as possible, and set the opaque property on opaque views to avoid useless alpha-channel compositing.
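A small sketch of the opacity hint, assuming label is a fully covered view:

label.backgroundColor = .white   // give it a solid background
label.isOpaque = true            // promise the compositor there is nothing to blend behind it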

3 Graphics generation

CALayer's border, rounded corners, shadow, and mask, and CAShapeLayer's vector-graphics display usually trigger off-screen rendering, which normally happens on the GPU.

When a list view contains a large number of layers with rounded corners and is swiped quickly, a lot of GPU resources are consumed and the interface stutters. To avoid this, you can try turning on the CALayer.shouldRasterize property, which shifts the off-screen rendering onto the CPU; better still, avoid rounded corners, shadows, and masks wherever possible.
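A hedged sketch of the rasterization switch, assuming cell is a scrolling cell's view; remember to match the device scale, or the cached bitmap will look blurry:

cell.layer.cornerRadius = 8
cell.layer.masksToBounds = true
cell.layer.shouldRasterize = true
cell.layer.rasterizationScale = UIScreen.main.scale   // cache at device resolution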

Note: the GPU has two screen-rendering modes:

On-Screen Rendering

The normal GPU rendering path: the GPU renders the finished frame into the frame buffer and then displays it on screen.

Off-Screen Rendering

An additional off-screen rendering buffer is created (for example, to hold data that will be reused later); its contents are still submitted to the frame buffer later for display on screen.

Off-screen rendering requires creating a new buffer, and the rendering process involves multiple context switches, from the current screen to the off-screen environment and, once off-screen rendering finishes, back again so the result can be displayed; hence the cost is high.

The above is the whole process of UIView from drawing through rendering to display. For an ambitious iOS developer, keeping iOS interfaces smooth is an unending pursuit.