Screen stutter and tearing

Screen stutter and tearing refer to abnormal display phenomena that appear when the screen draws graphics:

  • Tearing: parts of two different frames are shown at once, so the picture appears misaligned
  • Frame drop: the same frame of data is displayed repeatedly

So why do these display problems occur? Let’s talk through it in detail.

Before the analysis, let’s review some CPU and GPU basics, as well as how screen imaging evolved.

The CPU and GPU

  • CPU: the computation and control core of the whole computer; it processes complex logic and highly dependent data, and achieves concurrency through time-slice switching
  • GPU: a microprocessor dedicated to drawing operations; it is a highly parallel unit processor with very low data dependency

Screen imaging

Computer displays evolved from the early random-scan displays to today’s raster-scan displays.

A graphic image is composed of a pixel array, and the time to display an image is independent of the image’s complexity: raster scanning always sweeps the entire raster, starting from the upper-left corner.

The structure of a simple raster-scan system is as follows:

For example, to display a 60 × 60 bitmap, the required memory is 60 × 60 × 4 bytes (RGBA) = 14,400 bytes. In the early days, before dedicated graphics cards, the CPU first decoded the image into main memory, and the display controller then read the image data from system main memory over the system bus, which made reads slow.
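
A minimal Swift sketch of that arithmetic (the full-screen resolution used below is just an illustrative figure):

```swift
// The memory an uncompressed bitmap needs depends only on its dimensions
// and bytes per pixel, never on what the image depicts.
let width = 60
let height = 60
let bytesPerPixel = 4                      // RGBA, one byte per channel

print(width * height * bytesPerPixel)      // 14400 bytes

// The same arithmetic for a full-screen buffer, e.g. 1170 x 2532:
let screenBytes = 1170 * 2532 * bytesPerPixel
print(screenBytes)                         // 11849760 bytes, roughly 11.3 MB
```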

The raster-scan structure then evolved: the common raster-scan system allocates a region of system main memory as a frame buffer to store the display data. But that data is still read over the system bus, so bandwidth remains limited, which led to the following advanced raster-scan system structure.

Here the image is decoded into main memory and then copied to the frame buffer in video memory, and the GPU operates directly on the data in that frame buffer.

With that background in mind, let’s start analyzing the screen-tearing problem.

Screen tearing

A raster-scan system is mainly divided into three parts:

  • The display (shows the contents of the frame buffer)
  • The video controller (controls the refresh, mapping the frame buffer onto the display)
  • The frame buffer (video memory; it mainly stores the image data to be refreshed)

The display shows the data in the frame buffer. During raster scanning, because of the persistence of vision of the human eye, a refresh rate above roughly 16 frames per second is perceived as continuous motion. The simplified screen-imaging process is as follows:

Causes of screen tearing

  • While an image is being displayed, data is continuously fetched from the frame buffer and shown on screen

  • While the next frame is still being rendered, the old data remains in the frame buffer, so the screen fetches and displays the old data

  • If a new frame of data is processed and written into the buffer before the old frame has been fully read out, the remaining part of the screen displays the new data. The two parts no longer match, so people, scenery, and other picture content appear misaligned on screen, as the diagram and the toy simulation below illustrate:

    Causes of tearing:
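
A toy Swift simulation of the mechanism (for illustration only, not a real graphics API): the “display” reads the frame buffer row by row while the “renderer” overwrites it mid-scan, so the scanned-out frame mixes two frames:

```swift
// Four scanlines, all holding the old frame at the start of scan-out.
var frameBuffer = Array(repeating: "OLD", count: 4)

var scannedOut: [String] = []
for row in 0..<frameBuffer.count {
    if row == 2 {
        // A new frame arrives while scan-out is only half done.
        frameBuffer = Array(repeating: "NEW", count: 4)
    }
    scannedOut.append(frameBuffer[row])
}
print(scannedOut) // ["OLD", "OLD", "NEW", "NEW"] -> a torn frame
```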

Solving screen tearing

Apple uses VSync + double buffering to solve the tearing problem. Here’s a simple illustration:

  • VSync (vertical synchronization): locks the frame buffer so that while the beam is scanning, the next frame is read only after the current scan completes, never when only a portion has been read
  • Double buffering: uses two frame buffers to store the GPU’s output; while the screen displays the contents of one buffer, the GPU renders the next frame into the other, and the two alternate (see the toy sketch after this list)
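
A toy Swift sketch of how the two mechanisms cooperate; this is a simulation for illustration, not a real graphics API:

```swift
// The GPU draws into the back buffer while the display scans the front
// buffer; the swap happens only on the VSync signal, never mid-scan,
// so a frame can never be shown half old / half new.
struct DoubleBuffer {
    private var front = "frame 0"       // currently being scanned out
    private var back  = "frame 1"       // currently being rendered into

    mutating func render(_ frame: String) {
        back = frame                    // GPU writes only to the back buffer
    }

    mutating func vsync() {
        (front, back) = (back, front)   // swap at the VSync boundary
    }

    var onScreen: String { front }
}

var buffers = DoubleBuffer()
buffers.render("frame 2")               // drawing happens off-screen
print(buffers.onScreen)                 // "frame 0": no tearing mid-scan
buffers.vsync()                         // VSync arrives: buffers swap
print(buffers.onScreen)                 // "frame 2"
```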

Here’s the official Apple schematic:

Using VSync + double buffering fundamentally solves the problem of screen tearing, but it introduces another problem: screen stalling and frame dropping.

Tearing typically occurs when a high-FPS video or game scene is played on a low-end device. It is not common on iOS devices; it mainly appears when very high-frequency motion effects are combined with complex, overlapping layers on low-end iOS hardware.

Screen stuttering

Frame dropping: simply put, the screen repeatedly displays the same frame of data. When a VSync signal arrives but the CPU and GPU have not yet finished processing the next frame’s image data, the previous frame is rendered again.
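
A common way to observe dropped frames in practice is to count CADisplayLink ticks: the link fires once per screen refresh, so a measured rate below the display’s refresh rate (e.g. under 60 on a 60 Hz screen) means frames were dropped. A minimal sketch (the one-second reporting window is an arbitrary choice):

```swift
import UIKit

// The display link fires once per screen refresh; counting ticks over a
// window and dividing by the elapsed time gives the effective frame rate.
final class FPSMonitor: NSObject {
    private var link: CADisplayLink?
    private var windowStart: CFTimeInterval = 0
    private var frameCount = 0

    func start() {
        link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link?.add(to: .main, forMode: .common)
    }

    func stop() {
        link?.invalidate()
        link = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        if windowStart == 0 { windowStart = link.timestamp; return }
        frameCount += 1
        let elapsed = link.timestamp - windowStart
        guard elapsed >= 1 else { return }      // report about once per second
        print(String(format: "FPS: %.1f", Double(frameCount) / elapsed))
        frameCount = 0
        windowStart = link.timestamp
    }
}
```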

To reduce frame drops (reduce, not eliminate them), triple buffering was introduced: with a third buffer, the CPU and GPU are kept busy more of the time, which lowers the number of dropped frames.

Summary:

  • CPU/GPU rendering pipeline takes too long -> dropped frames
  • Vertical synchronization (VSync) + double buffering (DoubleBuffering) solves the screen-tearing problem, at the cost of dropped frames
  • Buffers: using the CPU/GPU reasonably (e.g. triple buffering) reduces the number of dropped frames

iOS rendering process

The overall rendering process in iOS is as follows:

  • The App calls interfaces from frameworks such as CoreGraphics, CoreAnimation, and CoreImage to trigger graphics rendering (see the sketch after this list)
  • These frameworks render through OpenGL ES or Metal, which drive the GPU to perform the rendering; the result is finally displayed on the screen
  • Because OpenGL ES is cross-platform, each platform must provide a carrier for it in its own implementation; on iOS, CoreAnimation provides the window/layer so that the App can call it
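
As a minimal sketch of the first two bullets (the view class and values are illustrative), custom drawing enters the pipeline through CoreGraphics, while a property animation enters through CoreAnimation:

```swift
import UIKit

// draw(_:) supplies CPU-rendered content via CoreGraphics; the animation
// is packed and handed to the Render Server via CoreAnimation. Both end
// up on the GPU through the pipeline described above.
final class ChartView: UIView {
    // CoreGraphics path: custom drawing into the view's backing layer.
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.systemBlue.cgColor)
        ctx.fill(CGRect(x: 0, y: rect.midY, width: rect.width, height: rect.height / 2))
    }

    // CoreAnimation path: a layer animation submitted to the Render Server.
    func pulse() {
        let anim = CABasicAnimation(keyPath: "opacity")
        anim.fromValue = 1.0
        anim.toValue = 0.3
        anim.duration = 0.25
        anim.autoreverses = true
        layer.add(anim, forKey: "pulse")
    }
}
```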

View and CALayer

  • View

    View belongs to UIKit; it is responsible for drawing and animation, for managing layout and subviews, and for handling touch events

  • CALayer

    CALayer belongs to the CoreAnimation framework and is responsible for rendering; it backs both UIKit and AppKit

  • The relationship between the two

    UIView is based on the UIKit framework and can handle touch events and manage subviews. CALayer is based on CoreAnimation (the QuartzCore framework) and is responsible only for rendering, not for user events.

    In terms of inheritance, CALayer inherits from NSObject, while UIView inherits from UIResponder. UIView is a UIKit component, and UIKit components are ultimately broken down into layers stored in a layer tree. When you need to interact with the user, use UIView; when no interaction is needed, you can use CALayer directly (see the sketch below).
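
A minimal sketch of that relationship (the class and values are illustrative):

```swift
import UIKit

// Every UIView is backed by a CALayer, and a bare CALayer can be used
// when no user interaction is needed.
final class BadgeView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)

        // The view's backing layer: UIView delegates all rendering to it.
        layer.backgroundColor = UIColor.white.cgColor

        // A standalone sublayer: rendered by CoreAnimation, but it cannot
        // receive touch events on its own.
        let badge = CALayer()
        badge.frame = CGRect(x: 20, y: 20, width: 60, height: 60)
        badge.backgroundColor = UIColor.systemRed.cgColor
        badge.cornerRadius = 30
        layer.addSublayer(badge)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
```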

CoreAnimation

CoreAnimation (“Render, compose, and animate visual elements”) is essentially a compositing engine responsible for rendering, composition, and the implementation of animations.

Two frameworks are built on top of CoreAnimation: UIKit on iOS and AppKit on macOS. CoreAnimation itself sits on top of Metal and CoreGraphics.

CoreAnimation’s rendering process

The process is as follows:

It can be divided into two parts:

  • CoreAnimation part

    • The App handles UIView events and the like; the CPU completes the calculation of the display content, packs the computed layers, and submits them to the Render Server on the next runloop (a code sketch of this commit step closes this section)
    • The Render Server decodes the received content, then executes the related OpenGL (or Metal) calls so that the GPU performs the rendering. The Render Server’s work breaks down as follows:
  • GPU rendering part

    • The GPU completes the rendering of the display content through the vertex shader and fragment shader, and the result is stored in the frame buffer
    • Through the frame buffer, the video controller, and related components, the content is displayed on the screen
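
To connect the CPU-side commit above to code, here is a minimal sketch using CATransaction; property changes are batched, and the packed layer tree is what gets submitted to the Render Server when the transaction commits (implicitly, at the end of the current runloop iteration). The layer and values are illustrative:

```swift
import QuartzCore

// Property changes are recorded on the CPU side; the packed layer tree is
// submitted to the Render Server when the transaction commits.
let layer = CALayer()
layer.bounds = CGRect(x: 0, y: 0, width: 100, height: 100)

CATransaction.begin()
CATransaction.setAnimationDuration(0.5)
layer.opacity = 0.5                        // model change, recorded by CoreAnimation
layer.position = CGPoint(x: 100, y: 100)   // still CPU-side only
CATransaction.commit()                     // batched changes head to the Render Server
```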