I. The display principle
Take the CRT display of the past as an example: a CRT displays an image by using an electron beam to excite the phosphor on the inner surface of the screen. Because the phosphor goes dark soon after being lit, the electron gun must re-excite these points in a constant cycle. The gun scans line by line from top to bottom; when the scan is complete, the display has shown one frame, and the gun returns to its initial position to begin the next scan. To keep the display in sync with the system’s video controller, the display (or other hardware) generates a series of timing signals using a hardware clock. When the gun moves to a new line and is ready to scan, the monitor emits a horizontal synchronization signal, or HSync; when a frame is finished and the gun has returned to its original position, the display emits a vertical synchronization signal, or VSync, before it is ready to draw the next frame. The monitor usually refreshes at a fixed rate, which is the frequency at which the VSync signal is generated. Although today’s LCDs differ in pixel structure and RGB arrangement, their refresh method is still based on the same principle.
II. The principle of image display in iOS
1. Image loading process
iOS loads an image from disk and displays it on the screen via a `UIImageView`. The loading process is as follows:
- We load the image from disk with `+imageWithContentsOfFile:` (which uses Image I/O to create a `CGImageRef` backed by memory-mapped data). At this point, the image has not been decoded. During this step, data is copied from disk to a kernel buffer, and then from the kernel buffer into user space.
- We create a `UIImageView` and assign the image data to it. If the image data is not yet decoded (PNG/JPG), it is decoded into bitmap data. (What is a bitmap? Don’t worry, read on.)
- An implicit `CATransaction` captures the changes to the `UIImageView` layer tree.
- When the next runloop of the main thread arrives, Core Animation commits this implicit `transaction`. This process may `copy` the image and, depending on factors such as whether the image is byte-aligned, the `copy` may involve some or all of the following steps:
  - allocating memory buffers to manage file I/O and decompression;
  - reading the file data from disk into memory;
  - decoding the compressed image data into an uncompressed bitmap, which is a very time-consuming CPU operation.
- The `CALayer` in Core Animation then renders the `UIImageView` layer using the uncompressed bitmap data.
- Finally, after the CPU has computed the image’s `frame` and decompressed the image, it hands the bitmap over to the GPU for rendering.
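The loading steps above can be sketched in Swift. This is a minimal sketch, not production code; the file path and view frame are hypothetical, and the comments mark where the deferred work described above actually happens:

```swift
import UIKit

// Loading is cheap: Image I/O memory-maps the file. The image data
// is NOT decoded yet. (The path below is hypothetical.)
let image = UIImage(contentsOfFile: "/path/to/photo.png")

let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))

// Assigning the image marks the layer as dirty; an implicit CATransaction
// captures the layer-tree change. When the next runloop of the main thread
// arrives, Core Animation commits the transaction, and the PNG data is
// decoded into an uncompressed bitmap (on the main thread) before the
// layer contents are handed to the GPU for rendering.
imageView.image = image
```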
2. Rendering process
- The GPU gets the coordinates of the image
- The coordinates are passed to the vertex shader (vertex calculation)
- The image is rasterized (finding the pixels on the screen that the image maps to)
- The fragment shader runs (computing the final display color of each pixel)
- The result is rendered from the frame buffer to the screen
In the simplest case there is only one frame buffer, and reads and refreshes of that single buffer can be inefficient. To solve this, display systems usually introduce two buffers, i.e. a double-buffering mechanism. In this scheme, the GPU pre-renders one frame into a buffer for the video controller to read, and after the next frame is rendered, the GPU points the video controller directly at the second buffer. This greatly improves efficiency.
Double buffering solves the efficiency problem but introduces a new one. If the GPU submits a new frame to the frame buffer and swaps the two buffers while the video controller is still mid-read, that is, while the screen content is only half displayed, the video controller will draw the lower part of the screen from the new frame’s data, causing screen tearing.
To solve this problem, GPUs usually provide a mechanism called VSync (vertical synchronization). When VSync is enabled, the GPU waits for a VSync signal from the display before rendering a new frame and updating the buffer. This eliminates tearing and improves the smoothness of the picture, but it consumes more computing resources and can introduce some latency.
From the above we can also see that what a computer shows on screen is normally the result of the CPU and GPU cooperating to complete each render. So what is the exact division of labor between the CPU and GPU in this rendering process?
- CPU: computes the view’s `frame`, decodes images, and delivers the texture image data that needs to be drawn to the GPU via the data bus
- GPU: texture blending, vertex transformation and calculation, pixel-fill calculation, and rendering into the frame buffer
- Clock signals: the vertical sync signal `V-Sync` and the horizontal sync signal `H-Sync`
- Double buffering on iOS devices: display systems typically introduce two frame buffers, i.e. double buffering. (iOS devices always use double buffering with VSync enabled. On Android, Google did not introduce this mechanism until version 4.1; current Android uses triple buffering + VSync.)
- Displaying an image on the screen is a collaboration between the CPU and the GPU
Images are the biggest memory hog for apps, and loading an image from disk to the screen is a complex process.
III. Image decompression
We mentioned above that image decompression is a very time-consuming CPU operation, and by default it is performed on the main thread. When there are many images to load, this can seriously hurt our application’s responsiveness, especially in fast-scrolling lists. Let’s take a look at the decompression process. An image is, in essence, a bunch of pixels, and a bitmap is simply an array of pixels. The PNG and JPG images we use in daily development are compressed bitmap formats: PNG is losslessly compressed and supports an alpha channel, while JPEG is lossy and allows a compression quality from 0 to 100%. Apple provides the following functions to generate PNG and JPEG data:
```objc
// return image as PNG. May return nil if image has no CGImageRef or invalid bitmap format
UIKIT_EXTERN NSData * __nullable UIImagePNGRepresentation(UIImage * __nonnull image);

// return image as JPEG. May return nil if image has no CGImageRef or invalid bitmap format.
// compression is 0(most)..1(least)
UIKIT_EXTERN NSData * __nullable UIImageJPEGRepresentation(UIImage * __nonnull image, CGFloat compressionQuality);
```
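A quick usage sketch of these two APIs in Swift. Note that on modern SDKs they are exposed as the `UIImage` instance methods `pngData()` and `jpegData(compressionQuality:)`; the asset name below is hypothetical:

```swift
import UIKit

// "photo" is a hypothetical asset name in the app bundle.
guard let image = UIImage(named: "photo") else { fatalError("asset missing") }

// Lossless PNG data; preserves the alpha channel.
let pngData: Data? = image.pngData()

// Lossy JPEG data; quality runs from 0.0 (max compression) to 1.0 (best quality).
let jpegData: Data? = image.jpegData(compressionQuality: 0.8)
```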
From the above we can see that the higher an image’s resolution, the more pixels it contains, and the longer it takes to decompress it into a bitmap. This decompression step is unavoidable: displaying an image really means displaying the pixels that make it up, so before an image on disk can be rendered to the screen, its raw pixel data (the bitmap) must be obtained before any subsequent drawing can happen. Since decompression is inevitable, and we don’t want it executing on the main thread where it hurts responsiveness, is there a better solution? Of course: as you may have guessed, if decompressing on the main thread hurts performance, why not move it to a background thread?
As we mentioned earlier, when an undecoded image is about to be rendered to the screen, the system decompresses it on the main thread; an image that has already been decompressed is not decompressed again. The industry’s solution, therefore, is to force decompression ahead of time on a background thread.
The principle of forced decompression is to redraw the image, obtaining a new, already-decompressed bitmap. The core function used is `CGBitmapContextCreate`:
```objc
CG_EXTERN CGContextRef __nullable CGBitmapContextCreate(void * __nullable data,
    size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow,
    CGColorSpaceRef cg_nullable space, uint32_t bitmapInfo)
    CG_AVAILABLE_STARTING(__MAC_10_0, __IPHONE_2_0);
```
The function’s parameters:
- data: if not `NULL`, it must point to a block of memory of at least `bytesPerRow * height` bytes; if `NULL`, the system automatically allocates and frees the required memory for us, so we usually just pass `NULL`;
- width and height: the width and height of the bitmap, set to the image’s pixel width and pixel height respectively;
- bitsPerComponent: the number of bits used for each color component of a pixel; specify 8 for the RGB color space;
- bytesPerRow: the number of bytes per row of the bitmap, at least `width * bytes per pixel`. When we pass `0/NULL`, the system not only computes this for us automatically but also applies `cache line alignment` optimization;
- space: the color space mentioned earlier, generally RGB;
- bitmapInfo: the bitmap layout information, e.g. `kCGImageAlphaPremultipliedFirst`.
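Putting the parameters together, here is a minimal Swift sketch of forced decompression in the spirit of what libraries like SDWebImage and YYImage do. The function name and the dispatch pattern in the usage comment are our own illustration, not either library’s actual API:

```swift
import UIKit

/// Redraws `image` into a new bitmap context, so the expensive decode
/// happens here, on the caller's (background) thread, instead of lazily
/// on the main thread at render time.
func decodedImage(_ image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    // data = nil and bytesPerRow = 0 let the system allocate the buffer
    // and choose a cache-line-aligned row stride for us.
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
    else { return nil }
    // Drawing forces the compressed PNG/JPG data to be decoded into the bitmap.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let decoded = context.makeImage() else { return nil }
    return UIImage(cgImage: decoded, scale: image.scale, orientation: image.imageOrientation)
}

// Usage sketch: decode off the main thread, then hand the bitmap-backed
// image back to the UI.
// DispatchQueue.global(qos: .userInitiated).async {
//     let decoded = decodedImage(someImage)
//     DispatchQueue.main.async { imageView.image = decoded }
// }
```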
The popular third-party image libraries YYImage and SDWebImage both use this function; interested readers can explore it in their source code. If you are curious about the performance differences between these common libraries, take a look at these image-library benchmark posts: blog.csdn.net/cool720/art… www.jianshu.com/p/aae77d7c1…
Conclusion:
- Image files are decompressed by the CPU only once they are confirmed to be displayed, because decompression is very performance-intensive. A decompressed image is cached and will not be decompressed again.
- Rendering an image to the screen:
  a. CPU: reads the file -> computes the `frame` -> decodes the image -> delivers the decoded texture bitmap data to the GPU via the data bus
  b. GPU: gets the image’s frame -> vertex transformation and calculation -> rasterization -> computes each pixel’s color value from the texture coordinates (multiplying color by the alpha value when transparency is involved) -> renders into the frame buffer -> renders to the screen
Reference: github.com/SDWebImage/… Github.com/ibireme/YYI… www.jianshu.com/p/72dd07472… www.w3cschool.cn/stiahw/fb7j…
Remember to like this post if you found it helpful! Word is that everyone who likes after reading passes every exam and wins every prize. ღ(´ · ᴗ · ‘)