I. CPU and GPU

  • The CPU is both the computing core and the control core of a modern computer. It handles logically complex work: varied data types, branching and interrupts, and so on.
  • The GPU is a microprocessor dedicated to graphics computation; it is the link between the computer and the display terminal. GPUs are not good at logically complex work; they excel at running the same simple computation over many data elements in parallel.

1.1 Architecture of the CPU and GPU

The CPU has a small number of ALUs (arithmetic logic units), a control unit, and a large high-speed cache that keeps the whole system turning over quickly.

The GPU has a great many ALUs, which is what makes highly concurrent computation possible. Groups of these units are driven by SPs (stream processors) that control the calculations.
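To make the contrast concrete, here is a rough Swift sketch. A serial loop stands in for the CPU's single control flow, while `DispatchQueue.concurrentPerform` approximates the "many ALUs running one simple kernel" pattern. It still executes on CPU cores, so it only models the access pattern, not real GPU execution:

```swift
import Foundation

let n = 10_000
let input = (0..<n).map { Float($0) }

// CPU-style: one control flow stepping through the work serially,
// interleaved with whatever branching logic the task needs.
var serialOutput = [Float](repeating: 0, count: n)
for i in 0..<n {
    serialOutput[i] = input[i] * 2 + 1
}

// GPU-style access pattern: the same simple operation applied to every
// element independently. concurrentPerform still runs on CPU cores; it
// only models the "many ALUs, one simple kernel" idea.
var parallelOutput = [Float](repeating: 0, count: n)
parallelOutput.withUnsafeMutableBufferPointer { out in
    DispatchQueue.concurrentPerform(iterations: n) { i in
        out[i] = input[i] * 2 + 1  // each index is written by exactly one iteration
    }
}
```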

II. The computer display

2.1 Random-scan displays

2.2 Raster-scan displays

The electron beam scans horizontally from the top left of the screen toward the right edge; when it reaches the right edge, it returns to the left and scans the second line from left to right again, and so on down the screen.

The image on a raster display system is composed of an array of pixels. The time raster scanning takes to display an image is unrelated to the image itself: each frame takes a fixed time, regardless of the image's complexity. During display the image is being refreshed constantly, yet the human eye does not perceive the flicker; this is due to persistence of vision. At roughly 16 frames per second or more, people perceive the sequence as continuous motion.

2.2.1 Composition of a raster-scan display system

  • Monitor: handles the on-screen display; what it shows is the image data held in the frame buffer.
  • Video controller: controls the refresh, mapping the frame buffer's contents onto the display so the image can be drawn.
  • Frame buffer: stores the color values; also known as video memory (it holds what the display shows).

Early displays had no dedicated video memory, so they used a buffer in system memory instead.

Today a computer can use the video memory of a discrete graphics card as this buffer and process images on the GPU; this greatly improves speed, and the work is no longer limited by the system bus.

2.3 Screen tearing

Rendering process: GPU rendering -> frame buffer -> video controller reads the frame buffer contents (a bitmap) -> digital-to-analog conversion (digital signal to analog signal) -> display

Scan after scan, the image is continuously displayed, and at 60 FPS you get a very smooth picture.

However, if the video controller reads the frame buffer while the GPU has only partially written the next frame into it, the buffer holds a mixture of data: part of the displayed image comes from the old frame and part from the new one. The visible result is screen tearing.

So Apple adopts a strategy to deal with this: VSync + double buffering (DoubleBuffering).

  • VSync (vertical synchronization): locks the frame buffer so that the next frame is swapped in only after the current scan completes. This mainly prevents tearing.

  • Double buffering (DoubleBuffering): while one buffer is being read for display, the next frame is drawn into the second buffer; when that frame is finished, the two buffers swap roles, and the rotation continues. This attacks tearing at the root (a toy model follows below).
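Here is a toy model of that swap logic. The `DoubleBuffer` type and its methods are purely illustrative; the real mechanism lives in the display hardware and the OS, not in app code:

```swift
// Toy model of VSync + double buffering (illustration only, not a system API).
final class DoubleBuffer {
    private var front: [UInt8]        // scanned out by the video controller
    private var back: [UInt8]         // drawn into by the GPU
    private var backIsComplete = false

    init(pixelCount: Int) {
        front = [UInt8](repeating: 0, count: pixelCount)
        back = [UInt8](repeating: 0, count: pixelCount)
    }

    // "GPU" side: write a whole frame into the back buffer.
    func renderFrame(_ pixels: [UInt8]) {
        back = pixels
        backIsComplete = true
    }

    // Called on each VSync: swap only when a complete frame is ready, so the
    // display never scans out a half-written buffer (no tearing).
    func vsync() -> [UInt8] {
        if backIsComplete {
            swap(&front, &back)
            backIsComplete = false
        }
        // If no frame was ready, the old front buffer is shown again:
        // exactly the dropped frame described in the next section.
        return front
    }
}
```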

2.4 Frame drops

With this strategy enabled, a new problem appears: dropped frames. If the CPU and GPU have not finished preparing the image data when the vertical sync signal arrives, the screen has no new frame to show, and the current buffer's contents are rendered again for another refresh.

A further strategy is used: triple buffering. The principle is the same as double buffering; frames can still drop, but the probability is much lower.

Frame drops, summarized:

  1. The CPU/GPU rendering pipeline takes too long, so the frame misses its VSync deadline.
  2. VSync + double buffering (DoubleBuffering) resolves screen tearing at the cost of occasional dropped frames.
  3. Triple buffering uses the CPU/GPU more efficiently and reduces the number of dropped frames.
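One practical way to observe dropped frames from app code is `CADisplayLink`, which fires in step with the display refresh. A minimal sketch; the `FrameWatcher` type and its 1.5x threshold are illustrative choices, not a system API:

```swift
import UIKit

// Watches frame-to-frame intervals; an interval noticeably longer than
// one refresh period means at least one frame was repeated (dropped).
final class FrameWatcher {
    private var link: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        self.link = link
    }

    @objc private func tick(_ link: CADisplayLink) {
        defer { lastTimestamp = link.timestamp }
        guard lastTimestamp > 0 else { return }

        let interval = link.timestamp - lastTimestamp
        // At 60 FPS a frame should take ~16.7 ms.
        if interval > (1.0 / 60.0) * 1.5 {
            print("Dropped frame(s): interval \(interval * 1000) ms")
        }
    }

    func stop() {
        link?.invalidate()
        link = nil
    }
}
```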

III. OpenGL, layers, and off-screen rendering

Take a look at the rendering process of our app:

OpenGL ES is what drives the GPU to do display work for you. The official graphics frameworks we use are wrappers around OpenGL ES, providing a carrier that helps us use OpenGL ES.

Core Animation is essentially a compositing engine that renders, composes, and animates visual content.

3.1 Relationship between View and Layer

Whether you are doing iOS or Mac development, Core Animation operates on layers; the AppKit/UIKit view is built on top of a layer and hands rendering off to it, so the view itself is not what actually gets drawn.

UIView:

  1. A carrier for drawing and animation
  2. Layout and management of subviews
  3. Handling of click/touch events

CALayer:

  1. Rendering and animation
  2. Both AppKit and UIKit render through it, so it takes no part in layout or event handling

This split exists to separate responsibilities, and because the interaction rules differ between systems: touch on iOS versus mouse and keyboard on macOS.
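A small sketch of that division of labor in Swift: the view owns layout and event handling, while its backing layer (swapped here for a `CAShapeLayer` via `layerClass`) owns what gets rendered. `BadgeView` is a made-up example class:

```swift
import UIKit

final class BadgeView: UIView {
    // Swap the backing layer class; layout code stays in the view.
    override class var layerClass: AnyClass { CAShapeLayer.self }

    override func layoutSubviews() {
        super.layoutSubviews()
        // View responsibility: layout. Layer responsibility: drawing.
        let shape = layer as! CAShapeLayer
        shape.path = UIBezierPath(ovalIn: bounds).cgPath
        shape.fillColor = UIColor.systemRed.cgColor
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Event handling lives on the view, not the layer.
        layer.opacity = layer.opacity == 1 ? 0.5 : 1
    }
}
```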

3.2 Rendering process

When any carrier (e.g. a UIButton or UIImageView) loads, it goes through Core Animation. Core Animation first processes its events (taps, position changes, etc.), then performs a Commit Transaction, submitting the layer data to the Render Server (the server side of Core Animation) for decoding, and waits for the next RunLoop callback. Once the Render Server has processed the data, it is handed to OpenGL, which schedules the GPU to render: vertex data -> vertex shader -> fragment shader -> wait for the next RunLoop -> display.
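The transaction half of this pipeline is visible in app code through `CATransaction`: changes made between `begin()` and `commit()` are batched and submitted to the Render Server together. A minimal sketch; `moveLayer` and its parameter values are illustrative:

```swift
import UIKit

func moveLayer(_ layer: CALayer, to position: CGPoint) {
    CATransaction.begin()
    CATransaction.setAnimationDuration(0.25)
    // All property changes inside the transaction are recorded...
    layer.position = position
    layer.opacity = 0.8
    // ...and committed together; the actual rendering happens out of
    // process in the Render Server on a later RunLoop iteration.
    CATransaction.commit()
}
```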

3.2.1 Render Server operation analysis

Core Animation submits the data to OpenGL. After receiving the buffer data, OpenGL first sets up the vertex shader, then the fragment shader, and finally writes the result to the render buffer.
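Sketched in Swift against the OpenGL ES API, that order of operations looks roughly like the following. OpenGL ES is deprecated on recent iOS versions, the helper names here are illustrative, and a valid `EAGLContext` is assumed to be current:

```swift
import OpenGLES

// Compiles one shader stage; `type` is GL_VERTEX_SHADER or GL_FRAGMENT_SHADER.
func compileShader(_ source: String, type: Int32) -> GLuint {
    let shader = glCreateShader(GLenum(type))
    source.withCString { cString in
        var ptr: UnsafePointer<GLchar>? = cString
        glShaderSource(shader, 1, &ptr, nil)
    }
    glCompileShader(shader)

    var status: GLint = 0
    glGetShaderiv(shader, GLenum(GL_COMPILE_STATUS), &status)
    precondition(status == GL_TRUE, "shader failed to compile")
    return shader
}

// Mirrors the order above: vertex shader first, then fragment shader,
// linked into a program whose output lands in the render buffer.
func makeProgram(vertexSource: String, fragmentSource: String) -> GLuint {
    let vertex = compileShader(vertexSource, type: GL_VERTEX_SHADER)
    let fragment = compileShader(fragmentSource, type: GL_FRAGMENT_SHADER)

    let program = glCreateProgram()
    glAttachShader(program, vertex)
    glAttachShader(program, fragment)
    glLinkProgram(program)
    return program
}
```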