Off-screen rendering and normal rendering

The difference between off-screen rendering and normal rendering comes down to one thing: off-screen rendering adds one extra off-screen buffer to the normal pipeline.

Normal rendering

After the app's data is computed by the CPU and rendered by the GPU, the result is stored in the frame buffer. The video controller then reads it from the frame buffer and displays it on the screen.

  • When the GPU renders into the frame buffer, it follows the “painter’s algorithm”: each layer is output in order from far to near (bottom to top), with each later layer overlaying the ones before it to produce the final display.

Off-screen rendering

However, some scenarios are not so simple. The GPU “painter” can output layer by layer onto the canvas, but it cannot go back and erase or change part of a layer after it has been rendered, because the pixel data of the earlier layers has already been permanently overwritten. This means that for each layer you must either find an algorithm that can render it in a single pass, or allocate a separate block of memory and use that temporary staging area (the off-screen buffer) for more complex, multi-step modification and clipping operations.

  • The off-screen buffer is not unlimited: its size is capped at 2.5 times the pixel size of the screen.

CPU “off-screen rendering”

As you know, if we implement the drawRect: method in a UIView subclass, even with no actual code in the method body, the system allocates a memory area (a backing store) for that view and waits for Core Graphics to draw into it.
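A minimal sketch (the class name PlainView is hypothetical): even an empty override is enough to make the system allocate that memory:

#import <UIKit/UIKit.h>

@interface PlainView : UIView
@end

@implementation PlainView
// Even an empty drawRect: makes the system allocate a backing store
// for the view and route its drawing through Core Graphics.
- (void)drawRect:(CGRect)rect {
    // Intentionally empty; the memory cost is paid regardless.
}
@end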

Many articles and videos also call “opening a CGContext to draw” an off-screen rendering (because the pixel data is stored temporarily in the CGContext rather than going straight into the frame buffer). Going further, every raster operation performed by the CPU, such as text rendering and image decoding, cannot draw directly into the GPU-controlled frame buffer; the result can only be placed temporarily in another block of memory, which by this definition is also “off-screen rendering”.
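For illustration, a sketch of such CPU rasterization (the helper name is hypothetical): the text is drawn into a bitmap context in ordinary memory, not into the frame buffer:

#import <UIKit/UIKit.h>

// Hypothetical helper: rasterizes text on the CPU into a bitmap.
// The pixels live in the context's memory, not in the frame buffer.
UIImage *RenderTextToImage(NSString *text, CGSize size) {
    UIGraphicsImageRenderer *renderer =
        [[UIGraphicsImageRenderer alloc] initWithSize:size];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
        [text drawAtPoint:CGPointZero
           withAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:17] }];
    }];
}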

Naturally, we assume that since the CPU is bad at this we should avoid it as much as possible, and we mistakenly conclude that this is why off-screen rendering must be avoided. But according to Apple’s engineers, CPU rendering is not real off-screen rendering. Another piece of evidence: if your view implements drawRect:, turn on Xcode’s “Color Offscreen-Rendered Yellow” option and you will find that the region is not marked yellow, indicating that Xcode does not consider it off-screen rendering.

In fact, rendering on the CPU is commonly known as “software rendering”, while real off-screen rendering happens on the GPU.

Common off-screen rendering scenarios

cornerRadius + clipsToBounds

If you want to draw a rounded-corner container and clip the content that extends beyond the corners, off-screen rendering may or may not be triggered. The following code does not trigger off-screen rendering:

UIButton *btn = [UIButton buttonWithType:UIButtonTypeCustom];
btn.frame = CGRectMake(100, 180, 100, 100);
// A cornerRadius of half the side length turns the square button into a circle.
btn.layer.cornerRadius = 50;
btn.backgroundColor = [UIColor blueColor];
[self.view addSubview:btn];
// Only the single layer's backgroundColor needs clipping, so no
// off-screen buffer is required.
btn.clipsToBounds = YES;

However, setting an image or a title on the button in the code above will trigger off-screen rendering. After a title is set, Xcode’s Debug View Hierarchy shows that a UILabel subview has been added to the button to carry the title; after an image is set, a UIImageView is added to hold the image. It is not hard to see that once a title or image is set, the button consists of more than one layer, so rounding the corners can no longer be completed in a single pass, and the off-screen buffer becomes necessary. By the same reasoning, adding any subview to the button in the code above will also cause off-screen rendering.
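Building on the code above, a sketch of the variant that does trigger off-screen rendering (the image name @"avatar" is a placeholder):

// Adding an image gives the button a second layer (the UIImageView's),
// so rounding the corners can no longer be done in a single pass.
[btn setImage:[UIImage imageNamed:@"avatar"] forState:UIControlStateNormal];
// "Color Offscreen-Rendered Yellow" will now mark this button.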

shadow

The reason is that although the layer itself is a rectangular area, the shadow is by default applied to the layer’s “opaque region” and must appear underneath all of the layer’s content, so by the painter’s algorithm it has to be rendered first. But here lies the paradox: the body that casts the shadow (the layer and its sublayers) has not been composited yet, so how can the very first step draw a shape that is only known after the last step? In this situation another block of memory has to be allocated: the body content is drawn first, the shadow is added based on the shape of the rendered result, and the processed content is then handed to the frame buffer. However, if we tell Core Animation the shadow’s geometry in advance (via the shadowPath property), the shadow can of course be rendered first and independently, without depending on the layer body, and off-screen rendering is no longer needed.
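A minimal sketch of the shadowPath fix (the view and frame are arbitrary): the shadow geometry is declared up front, so it no longer depends on the rendered layer body:

UIView *card = [[UIView alloc] initWithFrame:CGRectMake(100, 320, 100, 100)];
card.backgroundColor = [UIColor whiteColor];
card.layer.shadowColor = [UIColor blackColor].CGColor;
card.layer.shadowOpacity = 0.5;
card.layer.shadowOffset = CGSizeMake(0, 3);
// Telling Core Animation the shadow's shape lets it render the shadow
// independently, avoiding the off-screen pass.
card.layer.shadowPath = [UIBezierPath bezierPathWithRect:card.bounds].CGPath;
[self.view addSubview:card];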

group opacity

Consider adding a UILabel to a UIView, where the background color of both the view and the label is white, the view’s alpha is 0.5 and the label’s is 1.0. With allowsGroupOpacity = YES, group opacity triggers off-screen rendering; with allowsGroupOpacity = NO, it does not.

Principle: the alpha is not applied to each layer separately. Instead, the entire layer subtree is drawn first, the alpha is then applied to the combined result, and that result is finally blended with the pixels of the layers below. Obviously, the final result cannot be obtained in a single pass.
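A sketch reproducing this scenario (frames are arbitrary; colors and alphas follow the description above):

UIView *container = [[UIView alloc] initWithFrame:CGRectMake(100, 320, 100, 100)];
container.backgroundColor = [UIColor whiteColor];
container.alpha = 0.5;
// YES: composite the subtree first, then apply the alpha; this needs
// an off-screen buffer. NO avoids the extra pass.
container.layer.allowsGroupOpacity = YES;

UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(10, 10, 80, 30)];
label.backgroundColor = [UIColor whiteColor];
label.text = @"label";
[container addSubview:label];
[self.view addSubview:container];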

mask

We know that a mask is applied on top of the layer and any combination of its sublayers, and it may itself have transparency. The principle of mask is in fact similar to that of group opacity: the work has to be completed with an off-screen rendering.
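A minimal sketch of a layer mask (the oval shape is arbitrary):

UIView *maskedView = [[UIView alloc] initWithFrame:CGRectMake(100, 450, 100, 100)];
maskedView.backgroundColor = [UIColor blueColor];
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.path = [UIBezierPath bezierPathWithOvalInRect:maskedView.bounds].CGPath;
// The mask applies to the composited result of the layer and all of
// its sublayers, so it forces an off-screen rendering pass.
maskedView.layer.mask = maskLayer;
[self.view addSubview:maskedView];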

rasterization

When rasterization is enabled (shouldRasterize = YES), the layer is rendered into a bitmap and stored in the off-screen rendering cache so that it can be reused directly the next time, improving efficiency (a minimal sketch follows the list below). A few suggestions for using rasterization:

  • If the layer cannot be reused, there is no need to turn on rasterization;
  • If the layer is not static and is modified frequently (for example, during animation), enabling rasterization can actually hurt efficiency;
  • The cached content has a time limit: if it is not used within 100 ms, it is discarded and cannot be reused;
  • The cache space is limited to 2.5 times the pixel size of the screen; a layer exceeding that limit will also fail to cache and cannot be reused.
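A minimal sketch of enabling rasterization (cell stands for some complex, mostly static view): note that rasterizationScale must match the screen scale, or the cached bitmap will look blurry on Retina screens:

// Cache the layer's composited result as a bitmap for reuse.
cell.layer.shouldRasterize = YES;
// Match the screen scale; the default of 1.0 looks blurry on Retina.
cell.layer.rasterizationScale = [UIScreen mainScreen].scale;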

UIBlurEffect

Again, the effect cannot be completed in a single pass: a Gaussian blur has to sample the already-rendered content beneath it, so it requires an off-screen buffer.
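A minimal sketch (the frame is arbitrary):

UIBlurEffect *blur = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
UIVisualEffectView *blurView = [[UIVisualEffectView alloc] initWithEffect:blur];
blurView.frame = CGRectMake(100, 580, 100, 100);
// The blur samples the content already rendered beneath the view,
// which cannot be done in a single pass.
[self.view addSubview:blurView];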

Performance impact of GPU off-screen rendering

GPU operation is highly pipelined. All of the computational work is being methodically output to the frame buffer when suddenly an instruction arrives saying the output must go to another block of memory. Everything in flight in the pipeline has to be discarded, and the GPU switches to serving only the current “rounded corner” operation; once that is finished, the pipeline is flushed again and returns to the normal flow of output to the frame buffer. In a tableView or collectionView, every frame of scrolling triggers a redraw of each cell, so once off-screen rendering is involved, the context switch described above happens 60 times per second, and each frame may contain dozens of images that require it. You can imagine the performance impact on the GPU (GPUs are very good at massively parallel computation, but frequent context switching is clearly not what they were designed for). So why do we still use off-screen rendering when we know it has performance problems?

  • It can handle special effects that cannot be completed in a single pass and need an off-screen buffer to hold intermediate state; these effects force off-screen rendering, such as the commonly used rounded corners, shadows, Gaussian blur, and rasterization.
  • It can improve rendering efficiency. If an effect is used multiple times, it can be rendered in advance and saved in the off-screen buffer for reuse. This case has to be triggered manually by the developer.

How to avoid off-screen rendering

  • Change the assets directly: have the images delivered with rounded corners already applied, so the UI uses pre-rounded images;
  • Use UIBezierPath to draw a closed rounded rectangle, clip the context so that only the interior is visible, render the layer (without rounded corners) into an image, and draw it into the clipped context. This method is efficient, but once the layer’s layout changes the Bezier path has to be redrawn manually, so you need to observe and redraw on changes to frame, color, and so on;
  • Use AsyncDisplayKit (Texture) as the main rendering framework and render text and images asynchronously;
  • For image rounded corners, uniformly adopt a “precomposite” strategy: use Core Graphics to round the image in advance rather than clipping through the container (see the sketch after this list);
  • For all shadows, use shadowPath to avoid off-screen rendering;
  • For irregularly shaped views, use layer.mask together with shouldRasterize to cache the render result.
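A sketch of the “precomposite” strategy from the list above (the helper name RoundedImage is hypothetical): round the image once on the CPU with Core Graphics, so the displaying view needs neither cornerRadius nor clipsToBounds:

#import <UIKit/UIKit.h>

// Hypothetical helper: returns a copy of `image` with the rounded
// corners baked in.
UIImage *RoundedImage(UIImage *image, CGFloat radius) {
    UIGraphicsImageRenderer *renderer =
        [[UIGraphicsImageRenderer alloc] initWithSize:image.size];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
        CGRect rect = (CGRect){ CGPointZero, image.size };
        // Clip the CPU-side context to a rounded rect, then draw the image.
        [[UIBezierPath bezierPathWithRoundedRect:rect cornerRadius:radius] addClip];
        [image drawInRect:rect];
    }];
}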
