Screen stutter

Screen stutter refers to an abnormal phenomenon that users can see directly on the screen, caused by tearing (the image is displayed incorrectly, with mismatched parts) and frame drops (the same frame of data is displayed repeatedly) while graphics and images are being displayed

  • Why does this happen? Below is a detailed explanation of screen stutter

【 Answer 】 Why the screen stutters

There are three main reasons

  • The CPU and GPU spend too much time in the rendering pipeline. As a result, when the bitmap is fetched from the buffer for display, the next frame's data is not yet ready and the previous frame's data is fetched again; this dropped frame is what the user perceives as stutter
  • Apple's official solution to screen tearing is VSync + double buffering, which fundamentally prevents and resolves tearing but introduces a new problem: frame drops. Even with two buffers, the speed at which the CPU and GPU process graphics and images is not addressed; when the screen receives the vertical sync signal and the data is not yet ready, the buffer still holds the previous frame's data, resulting in a dropped frame
  • Optimizing the VSync + double buffer scheme further, the double buffer can be replaced with a triple buffer. This does not fundamentally solve frame drops, but it makes them far less likely than with double buffering; the occasional dropped frame may well go unnoticed by users.

Next, a detailed analysis of the screen tearing and frame drop problems

Screen tearing

As shown below, screen tearing looks something like this

Before we talk about screen tearing, let's look at how the screen displays an image: what is the main process?

Screen imaging process

Take a look at the diagram below, which illustrates the screen imaging process in detail

  • The image to be displayed is rendered by the GPU

  • The rendered result is stored in the frame buffer; the format stored there is a bitmap

  • The video controller reads the bitmap from the frame buffer and delivers it to the display, which scans line by line from the upper left to show it

Cause of screen tearing

  • During the display of graphics and images, frame data is continuously fetched from the frame buffer, frame by frame.
  • While a new frame is being rendered, the frame buffer may still hold old data, and the screen reads and displays that old data.
  • If the new frame is processed and written into the buffer before the old data has been fully read out, the remaining part of the screen displays the new data, so the two parts of the picture do not match: people, scenery, and so on appear misaligned on screen. The illustration is as follows:

Apple's official solution to screen tearing is VSync + double buffering, which enforces synchronization at the expense of dropped frames.

The following is a diagram of the VSync + double buffer process

  • VSync: locks the frame buffer so that while the electron beam is scanning, the next frame is read only after the current scan completes, never partway through

  • Double buffering: two frame buffers hold the GPU's output. While the screen displays the contents of one buffer, the GPU renders the next frame into the other, and the two buffers alternate in turn
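To make this concrete, here is a minimal conceptual model in Swift of VSync + double buffering: a simplified sketch, not the actual OS implementation, showing how swapping only on the sync signal prevents tearing, and how a slow frame turns into a repeat of the previous one:

```swift
// Conceptual model of VSync + double buffering (not the real OS code):
// the GPU draws into a back buffer while the display scans the front
// buffer, and the two are swapped only on a VSync tick.
struct FrameBuffer {
    var frameNumber: Int = 0   // which frame's bitmap this buffer holds
}

final class DoubleBufferedDisplay {
    private var front = FrameBuffer()  // being scanned out to the screen
    private var back = FrameBuffer()   // being rendered into by the GPU
    private var backIsReady = false

    // The GPU has finished rendering a frame into the back buffer.
    func renderCompleted(frame: Int) {
        back.frameNumber = frame
        backIsReady = true
    }

    // Called once per vertical sync. Swapping only here prevents tearing:
    // the scan-out never reads a half-written buffer.
    func vsync() -> Int {
        if backIsReady {
            swap(&front, &back)
            backIsReady = false
        }
        // If the back buffer was not ready, the previous frame is shown
        // again -- exactly the "frame drop" described above.
        return front.frameNumber
    }
}
```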

Frame drop

With Apple's double buffer scheme, a new problem appears: dropped frames. What is a frame drop? Simply put, a frame drops when the screen displays the same frame of data repeatedly

As shown in the figure, the screen is currently displaying frame A. When the vertical sync signal arrives, frame B (still being processed by the CPU and GPU) is not ready, so the screen continues to display A

  • To mitigate frame drops, we can optimize on the basis of Apple's scheme by adopting a triple buffer, meaning three buffers of data are kept ready while the screen is displaying.
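In practice, dropped frames can be observed from app code with CADisplayLink, which fires once per screen refresh; an interval much longer than one refresh duration means frames were dropped. A minimal sketch (the class name and the 1.5x threshold are illustrative choices):

```swift
import UIKit

// Detecting dropped frames with CADisplayLink: the link fires once per
// screen refresh, so an interval much longer than one refresh duration
// means at least one frame was dropped.
final class FrameDropMonitor: NSObject {
    private var link: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0

    func start() {
        // Note: CADisplayLink retains its target; fine for this sketch.
        link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link?.add(to: .main, forMode: .common)
    }

    func stop() {
        link?.invalidate()
        link = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        defer { lastTimestamp = link.timestamp }
        guard lastTimestamp > 0 else { return }
        let elapsed = link.timestamp - lastTimestamp
        // `duration` is the expected time of one refresh (~16.7 ms at 60 Hz).
        if elapsed > link.duration * 1.5 {
            print("Dropped frame(s): \(elapsed * 1000) ms between callbacks")
        }
    }
}
```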

Rendering in iOS

The overall flow of rendering in iOS is shown below

  • The app invokes the interfaces of frameworks such as CoreGraphics, CoreAnimation, and CoreImage to trigger graphics rendering (see the sketch after this list)
  • Frameworks such as CoreGraphics, CoreAnimation, and CoreImage render via OpenGL ES, and OpenGL ES drives the GPU to render and finally display the result on screen
  • Since OpenGL ES is cross-platform, its implementation contains no window code; each platform provides the carrier for OpenGL ES. On iOS, CoreAnimation provides the window so that the app can call OpenGL ES.
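As a small example of the first step, an app can trigger rendering through CoreGraphics. A minimal sketch using UIGraphicsImageRenderer (the helper name is illustrative; the renderer hands the closure a CGContext and produces a bitmap):

```swift
import UIKit

// Example of the app layer triggering rendering through CoreGraphics:
// UIGraphicsImageRenderer provides a CGContext, and the drawing commands
// are turned into a bitmap.
func renderCircleImage(diameter: CGFloat, color: UIColor) -> UIImage {
    let size = CGSize(width: diameter, height: diameter)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        context.cgContext.setFillColor(color.cgColor)
        context.cgContext.fillEllipse(in: CGRect(origin: .zero, size: size))
    }
}
```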

Summary of rendering frameworks in iOS

It mainly consists of the following six frameworks, which are summarized in the table and will not be explained in detail here

Relationship between View and CALayer

Let's briefly talk about UIView and CALayer

UIView

  • UIView belongs to UIKit
  • Responsible for drawing graphics and performing animations
  • Used for interface layout and subview management
  • Handles user touch events (a sketch follows this list)
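A short sketch of these responsibilities in a UIView subclass (TappableCardView is an illustrative name; the overridden methods are standard UIKit API):

```swift
import UIKit

// A small UIView subclass illustrating its responsibilities: subview
// management, layout, and touch handling (which CALayer cannot do).
final class TappableCardView: UIView {
    private let titleLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        titleLabel.text = "Tap me"
        addSubview(titleLabel)                 // subview management
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) not supported") }

    override func layoutSubviews() {
        super.layoutSubviews()
        titleLabel.frame = bounds.insetBy(dx: 8, dy: 8)   // interface layout
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        backgroundColor = .systemBlue          // user interaction handling
    }
}
```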

CALayer

  • CALayer belongs to CoreAnimation
  • Responsible only for display, and what it displays is a bitmap
  • CALayer is used by both UIKit and AppKit. UIKit is the UI framework of the iOS platform, while AppKit is the UI framework of macOS. Because interface layout on iOS and macOS is not consistent (iOS is based on multi-touch interaction, macOS on mouse-and-keyboard interaction), layout is done in each platform's own framework, so the layer does not need to carry layout logic or cater to any particular layout mode.

【 Interview question 】 UIView and CALayer

  • UIView is based on the UIKit framework; it can handle user touch events and manage subviews
  • CALayer is based on CoreAnimation, which in turn is based on QuartzCore, so CALayer only handles display and not the user's touch events
  • Looking at the parent classes, CALayer inherits from NSObject while UIView inherits directly from UIResponder, so UIView adds event handling on top of what CALayer provides
  • At the bottom level, UIView belongs to the components of UIKit, and UIKit components are ultimately decomposed into layers and stored in the layer tree
  • At the application level, use UIView when you need to interact with the user; when no interaction is needed, either one works (see the sketch after this list)
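A sketch of the relationship: every UIView is backed by a CALayer exposed as view.layer, a view can swap in a custom backing layer class via layerClass, and display-only content can live on a bare CALayer (GradientView and badge are illustrative names):

```swift
import UIKit

// Every UIView is backed by a CALayer (view.layer); the view forwards
// display to the layer and adds event handling on top. A view can also
// swap in a different backing layer class via `layerClass`.
final class GradientView: UIView {
    // Make the backing layer a CAGradientLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CAGradientLayer.self }

    var gradientLayer: CAGradientLayer { layer as! CAGradientLayer }
}

// Display-only content can go straight onto a CALayer -- no UIView needed
// when there is no user interaction:
let badge = CALayer()
badge.backgroundColor = UIColor.systemRed.cgColor
badge.cornerRadius = 8
badge.frame = CGRect(x: 0, y: 0, width: 16, height: 16)
// someView.layer.addSublayer(badge)   // attach to an existing view's layer
```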

【 Interview question 】 Why are UIView and CALayer designed separately?

  • Mechanism and policy are separated

Once implemented, mechanisms rarely change, while policies are constantly optimized. CALayer can be viewed as a mechanism that provides layer drawing: look at CALayer's header file and you'll see it has barely changed. UIView can be viewed as a policy, and it has changed a great deal. The lower the level, the more mechanism there is, and the more stable the mechanism. Separating mechanism from policy means less code needs to be modified, especially underlying code, which improves system stability.

  • More immutability, more versatility

For example, NSView (Mac) and UIView (iOS) lay out differently, but they can reuse the same rendering logic. Stability comes from immutability: the more immutable parts a system has, the more stable it is, and the mechanism supplies that immutable part. One guiding idea in building a system is to extract the immutable parts and separate out the mutable ones. Water doesn't hold up tall buildings; solid concrete does. Less modification means fewer chances for bugs.

  • Each does its own job

Dividing the work between them, each with the smallest controllable granularity, also makes the system more stable and easier to modify.

  • Expose less

Interfaces should expose as little as possible. By the 80/20 principle, 20% of an interface can satisfy 80% of the needs, and the remaining 80% should stay hidden, because exposing less is always safer; those remaining expert-level interfaces can be buried deeper. UIView, for example, shadows most of the CALayer interface and abstracts more usable frame and animation implementations on top of it, which makes it much easier to get started.
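A quick illustration of that facade: setting a view's frame is translated into the backing layer's geometry (position, bounds, anchorPoint), which everyday UIKit code rarely touches directly:

```swift
import UIKit

// view.frame is a simple facade over the backing layer's geometry.
let view = UIView(frame: CGRect(x: 10, y: 20, width: 100, height: 50))
print(view.layer.frame)       // (10.0, 20.0, 100.0, 50.0) -- mirrors the view
print(view.layer.position)    // (60.0, 45.0): the frame's center point
print(view.layer.anchorPoint) // (0.5, 0.5) by default
```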

UIView and CALayer rendering

How do a view and its layer get rendered?

  • Rendering can be triggered in two ways

    ==> Triggered by the system: for example, loadView adds a subview whose drawRect: is overridden. CoreAnimation registers a RunloopObserver listening for the Runloop's BeforeWaiting state; the observer then calls CA::Transaction::commit(), which step by step reaches the drawRect: method

    ==> Triggered by a user touch event: the touch wakes the Runloop and is handled by source1 (__IOHIDEventSystemClientQueueCallback), then forwarded in the next Runloop by source0 to UIApplication (_UIApplicationHandleEventQueue). From within CoreAnimation, CA::Transaction::commit() is called, which again reaches drawRect: step by step

    Both paths end up in CoreAnimation's CA::Transaction::commit() method, which triggers the rendering of UIView and CALayer
  • At this point, inside CoreAnimation, CA::Transaction::commit() is called to create a CATransaction, which further calls CALayer's drawInContext:()
  • CALayer calls its delegate (the UIView) to ask whether there is anything to draw, i.e., it calls back the drawRect: method
  • In drawRect:, drawing can be done with CoreGraphics functions or with UIKit
  • The drawn bitmap is handed to CALayer and submitted to the GPU frame buffer via OpenGL ES
  • After the screen receives the vertical sync signal, it reads the data from the frame buffer and displays it on screen (a drawRect: sketch follows this list)
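A minimal sketch of the drawRect: path (ClockFaceView is an illustrative name). Note that setNeedsDisplay() does not draw immediately; it only marks the layer dirty, and CoreAnimation calls back into draw(_:) on the next Runloop cycle via the commit path described above:

```swift
import UIKit

// A custom view whose content is drawn in draw(_:) using CoreGraphics.
final class ClockFaceView: UIView {
    var progress: CGFloat = 0 {
        didSet { setNeedsDisplay() }   // schedule a redraw, don't draw now
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setStrokeColor(UIColor.systemGreen.cgColor)
        ctx.setLineWidth(4)
        // Stroke an arc proportional to `progress`, starting at 12 o'clock.
        ctx.addArc(center: CGPoint(x: rect.midX, y: rect.midY),
                   radius: min(rect.width, rect.height) / 2 - 4,
                   startAngle: -CGFloat.pi / 2,
                   endAngle: -CGFloat.pi / 2 + 2 * CGFloat.pi * progress,
                   clockwise: false)
        ctx.strokePath()
    }
}
```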

CoreAnimation

Apple officially describes CoreAnimation as "Render, compose, and animate visual elements," yet animation is only one part of it. CoreAnimation is actually a compositing engine, mainly responsible for rendering, compositing, and animation.
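From app code, layer changes are batched into transactions and committed to the render server; the implicit transaction commits at the end of the current Runloop, and an explicit one can be opened with CATransaction. A small sketch (assuming the layer is already attached to a visible layer tree):

```swift
import UIKit

// CoreAnimation transactions in app code: property changes are batched
// into a CATransaction and committed to the render server.
let layer = CALayer()

CATransaction.begin()
CATransaction.setAnimationDuration(0.5)
CATransaction.setCompletionBlock {
    print("Transaction committed and animations finished")
}
layer.opacity = 0.3          // implicitly animated layer changes
layer.position.x += 40
CATransaction.commit()
```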

CoreAnimation's position in iOS is shown below

  • There are two frameworks built on top of CoreAnimation: UIKit on iOS and AppKit on macOS

  • CoreAnimation itself is a wrapper built on Metal and CoreGraphics

Why does Apple provide two parallel frameworks (UIKit and AppKit) based on CALayer?

  • Separation of duties avoids a lot of duplicate code
  • The interaction rules of the two systems are inconsistent: although similar in function, the implementations differ significantly

The rendering pipeline in CoreAnimation

The rendering flow in CoreAnimation is shown in figure 1

It is mainly divided into two parts:

  • CoreAnimation part
  • GPU part

CoreAnimation part

  • The app handles events on carriers such as UIView and UIButton, completes the computation of the display content on the CPU, and packages the computed layers, which are sent to the Render Server on the next Runloop
  • The Render Server mainly decodes the received content for display, then executes OpenGL and other related operations and calls the GPU to render (the Render Server's operation is analyzed below)

GPU part

  • The vertex shader and fragment shader in the GPU complete the rendering of the display content, and the result is stored in the frame buffer
  • The GPU displays the frame on the screen through related components such as the frame buffer and the video controller