SurfaceFlinger's composition work

The Path of Android Meditation

Preface

After SurfaceFlinger has gone through the preparation for composition, it's time to do the actual compositing. The entry point for composition is doComposition. Let's see what SurfaceFlinger does during composition.

One doComposition

The composition methods are first invoked from the doComposition function, which in total calls

  1. doDisplayComposition
  2. display->getRenderSurface()->flip()
  3. postFramebuffer
void SurfaceFlinger::doComposition(const sp<DisplayDevice>& displayDevice, bool repaintEverything) {

    // These objects were introduced before composition and are described in detail in Appendix 1
    auto display = displayDevice->getCompositionDisplay();
    const auto& displayState = display->getState();

    // Check whether isEnabled of the OutputCompositionState is true. It is set to true
    // when the DisplayDevice is created and set to false only when the power is turned off
    if (displayState.isEnabled) {
        // Transform the dirty region into this screen's coordinate space
        const Region dirtyRegion = display->getDirtyRegion(repaintEverything);

        // Repaint the framebuffer if needed
        doDisplayComposition(displayDevice, dirtyRegion);

        // Clear the dirty region
        display->editState().dirtyRegion.clear();

        // Notify the Surface
        display->getRenderSurface()->flip();
    }

    // Send the composed data to the corresponding display device
    postFramebuffer(displayDevice);
}


Two doDisplayComposition

doDisplayComposition first determines whether an actual compositing pass is needed. Compositing happens only in the following two cases; in every other case it is skipped:

  1. The dirty region is not empty
  2. The display is handled by the Hardware Composer (referred to below as HWC)
void SurfaceFlinger::doDisplayComposition(const sp<DisplayDevice>& displayDevice,
                                          const Region& inDirtyRegion) {
    auto display = displayDevice->getCompositionDisplay();

    // Two cases require an actual compositing pass:
    // 1) The display is handled by HWC, which may need compositing to keep its virtual display state in sync
    // 2) The dirty region is not empty
    // displayDevice->getId() returns the HWC id; if it is present, HWC processing is required.
    // In other words, we return early only when there is no HWC id and the dirty region is empty
    if (!displayDevice->getId() && inDirtyRegion.isEmpty()) {
        return;
    }

    // Define a Fence that will signal when the composition result is ready
    base::unique_fd readyFence;

    // Start the composition and pass the Fence in
    if (!doComposeSurfaces(displayDevice, Region::INVALID_REGION, &readyFence)) return;

    // Queue the buffer, which also carries the Fence object created above
    display->getRenderSurface()->queueBuffer(std::move(readyFence));
}

The function doDisplayComposition does two things

  1. Call doComposeSurfaces to do the compositing.
  2. Call queueBuffer to queue the finished buffer.

Along the way it also passes a Fence, which SurfaceFlinger uses to keep access to the buffer synchronized.
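To make the role of the Fence concrete, here is a minimal sketch (not SurfaceFlinger code) of how a consumer might block on a fence file descriptor before touching a buffer's contents. It assumes the libsync helper sync_wait() is available; the function name waitForBufferReady and its parameters are invented for illustration.

#include <sync/sync.h>   // sync_wait(), from libsync (assumed available)
#include <unistd.h>      // close()

// Hypothetical helper: block until the producer (GPU/HWC) has finished writing
// the buffer guarded by fenceFd; only then is it safe to read the buffer's contents.
bool waitForBufferReady(int fenceFd, int timeoutMs) {
    if (fenceFd < 0) {
        return true;                               // -1 conventionally means "no fence attached"
    }
    const int err = sync_wait(fenceFd, timeoutMs); // blocks until the fence signals
    close(fenceFd);                                // the fd is consumed once we are done with it
    return err == 0;
}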

Three doComposeSurfaces

doComposeSurfaces is a long function that breaks down into the following parts, so let's look at it piece by piece.

3.1 Part 1 of doComposeSurfaces

The first part fetches some parameters up front, which is fairly straightforward. The only thing to notice is that a hasClientComposition variable is obtained here to tell whether client-side processing is needed.

From this step on, SurfaceFlinger performs different composition logic depending on how the layers are to be processed, mainly split into client processing and hardware processing: client processing means OpenGL (software/GPU) composition, and hardware processing means HWC composition.

bool SurfaceFlinger::doComposeSurfaces(const sp<DisplayDevice>& displayDevice,
                                       const Region& debugRegion, base::unique_fd* readyFence) {

    auto display = displayDevice->getCompositionDisplay();
    const auto& displayState = display->getState();
    const auto displayId = display->getId();

    // getRenderEngine returns the GLESRenderEngine that SurfaceFlinger created in init()
    // via setRenderEngine; it provides a set of OpenGL methods
    auto& renderEngine = getRenderEngine();

    // Whether protected content is supported; this is related to copyright protection (DRM)
    const bool supportProtectedContent = renderEngine.supportsProtectedContent();

    const Region bounds(displayState.bounds);
    const DisplayRenderArea renderArea(displayDevice);

    // Whether any layer is handled by GLES, i.e. composed by the client rather than by the display hardware
    const bool hasClientComposition = getHwComposer().hasClientComposition(displayId);

    bool applyColorMatrix = false;

    renderengine::DisplaySettings clientCompositionDisplay;
    std::vector<renderengine::LayerSettings> clientCompositionLayers;
    sp<GraphicBuffer> buf;
    base::unique_fd fd;

    if (hasClientComposition) {
        // Client processing is required, so execute the following logic
        if (displayDevice->isPrimary() && supportProtectedContent) {
            // If any visible layer is protected, the protected context needs to be used.
            // This is DRM (digital rights management) related and can be ignored here;
            // see Android DRM for the details
            bool needsProtected = false;
            for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                if (layer->isProtected()) {
                    needsProtected = true;
                    break;
                }
            }
            if (needsProtected != renderEngine.isProtected()) {
                renderEngine.useProtectedContext(needsProtected);
            }
            if (needsProtected != display->getRenderSurface()->isProtected() &&
                needsProtected == renderEngine.isProtected()) {
                display->getRenderSurface()->setProtected(needsProtected);
            }
        }

        // 1. Call RenderSurface's dequeueBuffer
        buf = display->getRenderSurface()->dequeueBuffer(&fd);
        ...

The first part is routine parameter handling. What's worth noting is that from here on the layer composition is handled either by the client (OpenGL) or by the HWC hardware, and that the RenderSurface's dequeueBuffer is then called to fetch a buffer. A file descriptor is passed in along with it; this is the Fence mechanism SurfaceFlinger uses for synchronization.

Some of the objects obtained from displayDevice in the first part are objects that DisplayDevice wraps; they are described in Appendix 1.

The last step is to get a buffer from the buffer queue

3.2 Part 2 of doComposeSurfaces

The first half of this code just sets some parameters; after that, different things happen depending on the composition type.

bool SurfaceFlinger::doComposeSurfaces(const sp<DisplayDevice>& displayDevice,
                                       const Region& debugRegion, base::unique_fd* readyFence) {
    ...
        // dequeueBuffer retrieves a buffer from the buffer queue
        buf = display->getRenderSurface()->dequeueBuffer(&fd);

        // Part 2 begins
        // First, check whether the dequeued buffer is null
        if (buf == nullptr) {
            return false;
        }

        clientCompositionDisplay.physicalDisplay = displayState.scissor;
        clientCompositionDisplay.clip = displayState.scissor;
        const ui::Transform& displayTransform = displayState.transform;
        clientCompositionDisplay.globalTransform = displayTransform.asMatrix4();
        clientCompositionDisplay.orientation = displayState.orientation;

        // DisplayColorProfile encapsulates all the state and logic for how colors are converted for this display
        const auto* profile = display->getDisplayColorProfile();
        Dataspace outputDataspace = Dataspace::UNKNOWN;
        if (profile->hasWideColorGamut()) {
            outputDataspace = displayState.dataspace;
        }
        clientCompositionDisplay.outputDataspace = outputDataspace;
        clientCompositionDisplay.maxLuminance =
                profile->getHdrCapabilities().getDesiredMaxLuminance();

        const bool hasDeviceComposition = getHwComposer().hasDeviceComposition(displayId);
        const bool skipClientColorTransform = getHwComposer().hasDisplayCapability(displayId,
            HWC2::DisplayCapability::SkipClientColorTransform);

        // Compute the color transform matrix
        applyColorMatrix = !hasDeviceComposition && !skipClientColorTransform;
        if (applyColorMatrix) {
            clientCompositionDisplay.colorTransform = displayState.colorTransformMat;
        }
    }

    /*
     * Then render the layers into the framebuffer
     */
    bool firstLayer = true;
    Region clearRegion = Region::INVALID_REGION;
    for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
        const Region viewportRegion(displayState.viewport);
        const Region clip(viewportRegion.intersect(layer->visibleRegion));

        if (!clip.isEmpty()) {
            // Call prepareClientLayer to set parameters on the Layer, depending on the composition type
            switch (layer->getCompositionType(displayDevice)) {
                case Hwc2::IComposerClient::Composition::CURSOR:
                case Hwc2::IComposerClient::Composition::DEVICE:
                case Hwc2::IComposerClient::Composition::SIDEBAND:
                case Hwc2::IComposerClient::Composition::SOLID_COLOR: {

                    const Layer::State& state(layer->getDrawingState());
                    if (layer->getClearClientTarget(displayDevice) && !firstLayer &&
                        layer->isOpaque(state) && (layer->getAlpha() == 1.0f) &&
                        layer->getRoundedCornerState().radius == 0.0f && hasClientComposition) {
                        // Never clear the first layer, as we can be sure the FB has already been cleared
                        renderengine::LayerSettings layerSettings;
                        Region dummyRegion;
                        // 2. Call Layer's prepareClientLayer
                        bool prepared =
                                layer->prepareClientLayer(renderArea, clip, dummyRegion,
                                                          supportProtectedContent, layerSettings);

                        if (prepared) {
                            layerSettings.source.buffer.buffer = nullptr;
                            layerSettings.source.solidColor = half3(0.0, 0.0, 0.0);
                            layerSettings.alpha = half(0.0);
                            layerSettings.disableBlending = true;
                            clientCompositionLayers.push_back(layerSettings);
                        }
                    }
                    break;
                }
                case Hwc2::IComposerClient::Composition::CLIENT: {
                    renderengine::LayerSettings layerSettings;
                    // 2. Call Layer's prepareClientLayer
                    bool prepared =
                            layer->prepareClientLayer(renderArea, clip, clearRegion,
                                                      supportProtectedContent, layerSettings);
                    if (prepared) {
                        clientCompositionLayers.push_back(layerSettings);
                    }
                    break;
                }
                default:
                    break;
            }
        } else {
            ALOGV("  Skipping for empty clip");
        }
        firstLayer = false;
    }
    ...
}

The second part mainly calls each Layer's prepareClientLayer. Depending on the composition type, it prepares different parameters to pass to the Layer; this sets up the Layer's parameters before composition.
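To make that concrete, here is a rough sketch of what a single entry of clientCompositionLayers could look like once prepareClientLayer has filled it in. The field names buffer.buffer, solidColor, alpha and disableBlending come from the snippet above; geometry.boundaries and source.buffer.fence are assumed to exist on renderengine::LayerSettings in this Android version, the variables layerGraphicBuffer and acquireFence are hypothetical, and the values are invented for illustration. This is a fragment that only makes sense inside the function above, not standalone code.

// Illustrative only: in reality prepareClientLayer derives these values
// from the Layer's drawing state.
renderengine::LayerSettings settings;
settings.geometry.boundaries = FloatRect(0, 0, 1080, 1920); // where the layer lands on screen (assumed field)
settings.source.buffer.buffer = layerGraphicBuffer;         // the GraphicBuffer produced by the app (hypothetical variable)
settings.source.buffer.fence = acquireFence;                // wait on this before sampling the buffer (assumed field)
settings.alpha = half(1.0f);                                // fully opaque
settings.disableBlending = false;                           // blend with the layers underneath

clientCompositionLayers.push_back(settings);                // later consumed by renderEngine.drawLayers(...)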

3.3 Part 3 of doComposeSurfaces

The key point of the third part is the drawLayers call:

bool SurfaceFlinger::doComposeSurfaces(const sp<DisplayDevice>& displayDevice,
                                       const Region& debugRegion, base::unique_fd* readyFence) {
    ...
    // Part 3: if client composition is used, perform the actual GPU rendering
    if (hasClientComposition) {
        clientCompositionDisplay.clearRegion = clearRegion;

        // First raise the GPU frequency to handle the color space conversion,
        // then reset it afterwards to save battery
        const bool expensiveRenderingExpected =
                clientCompositionDisplay.outputDataspace == Dataspace::DISPLAY_P3;
        if (expensiveRenderingExpected && displayId) {
            mPowerAdvisor.setExpensiveRenderingExpected(*displayId, true);
        }
        if (!debugRegion.isEmpty()) {
            ...
        }

        // 3. For client composition, render the layers for this display using the GPU.
        // renderEngine is the GLESRenderEngine, which wraps the OpenGL calls
        renderEngine.drawLayers(clientCompositionDisplay, clientCompositionLayers,
                                buf->getNativeBuffer(), /*useFramebufferCache=*/true, std::move(fd),
                                readyFence);
    } else if (displayId) {
        // If client composition is not used, reset the GPU frequency,
        // since keeping the GPU at a high frequency wastes power
        mPowerAdvisor.setExpensiveRenderingExpected(*displayId, false);
    }
    return true;
}

3.4 Summary of doComposeSurfaces

Now that we've looked at the three parts of doComposeSurfaces, here's a short summary (a condensed sketch of the flow follows the list):

  1. After processing some parameters, a buffer is allocated by calling the RenderSurface's dequeueBuffer
  2. The parameters for client composition are processed and the color transform matrix is computed
  3. To render the layers into the framebuffer, each visible Layer's prepareClientLayer is called to set up its parameters
  4. If client composition is used, GLESRenderEngine's drawLayers is called to actually render the layers
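To tie the steps together, here is a small runnable toy model of the flow (part 1: dequeue, part 2: collect layers, part 3: draw, then queue). Every type in it is a stand-in invented for illustration; none of this is real AOSP code, it only mirrors the shape of the calls above.

#include <cstdio>
#include <vector>

// Toy stand-ins for the real AOSP classes, invented for illustration only.
struct ToyLayerSettings { const char* name; };
struct ToyBuffer        { int id; };

struct ToyRenderSurface {
    ToyBuffer dequeueBuffer(int* fenceFd) { *fenceFd = 42; return ToyBuffer{1}; }   // part 1
    void queueBuffer(int readyFence)      { printf("queued buffer, fence %d\n", readyFence); }
};

struct ToyRenderEngine {
    // part 3: "draw" every collected layer into the target buffer
    int drawLayers(const std::vector<ToyLayerSettings>& layers, ToyBuffer target) {
        for (const auto& l : layers) printf("GPU draws %s into buffer %d\n", l.name, target.id);
        return 7;  // pretend this is the readyFence fd produced by the GPU
    }
};

int main() {
    ToyRenderSurface renderSurface;
    ToyRenderEngine renderEngine;

    // Part 1: dequeue a target buffer (the fence fd guards its previous contents)
    int acquireFence = -1;
    ToyBuffer buf = renderSurface.dequeueBuffer(&acquireFence);

    // Part 2: collect the layers that need client (GPU) composition
    std::vector<ToyLayerSettings> clientCompositionLayers = {{"wallpaper"}, {"app"}, {"statusbar"}};

    // Part 3: draw them all in one call, then queue the buffer with the ready fence
    int readyFence = renderEngine.drawLayers(clientCompositionLayers, buf);
    renderSurface.queueBuffer(readyFence);
    return 0;
}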

Supplementary notes:

Buffer allocation during SurfaceFlinger composition involves a producer/consumer model, and this model runs through the whole process.

As for Layer composition and its processing operations, the specifics differ between layer types; refer to [Layer Details] for the concrete operations.

RenderEngine is a GLESRenderEngine object created when SurfaceFlinger initializes, so it calls GLESRenderEngine’s drawLayers function, which is also done through OpenGL.

Four Brief introduction to producers and consumers

We said earlier that buffer allocation during composition involves a producer and a consumer. Here we can briefly describe how this model works. The detailed process is as follows:

  1. The producer requests a buffer from the BufferQueue
  2. The BufferQueue takes a free buffer, dequeues it, and hands it to the producer
  3. After the producer has written into the buffer, the buffer is enqueued and sits in the BufferQueue, waiting for the consumer to read it
  4. After the consumer has read the buffer, the cycle goes back to step 1

The queueBuffer call we just saw is essentially step 3: the producer enqueues the buffer, and the buffer then waits in the BufferQueue for the consumer to read it.

There is as much logic involved in dequeuing a buffer as in enqueuing one; see SurfaceFlinger's producers and consumers for the details. A producer-side sketch follows below.
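As a concrete example of the producer side of this model, here is a minimal sketch using the NDK's ANativeWindow API, where ANativeWindow_lock roughly corresponds to dequeuing a buffer (steps 1 and 2) and ANativeWindow_unlockAndPost to queueing it back (step 3). This is ordinary app-side NDK usage rather than SurfaceFlinger code; it assumes the window was obtained from a Surface and that the format is RGBA_8888.

#include <android/native_window.h>
#include <cstdint>
#include <cstring>

// Draw one frame into the window's buffer: dequeue (lock), write, queue (unlockAndPost).
// 'window' is assumed to have been obtained from a Surface, e.g. via ANativeWindow_fromSurface().
bool drawOneFrame(ANativeWindow* window) {
    ANativeWindow_Buffer buffer;
    // Steps 1-2: ask the BufferQueue for a free buffer (dequeue)
    if (ANativeWindow_lock(window, &buffer, /*inOutDirtyBounds=*/nullptr) != 0) {
        return false;
    }

    // The producer writes its pixels into the buffer. Here we just clear it to black;
    // note that the stride (in pixels) may be wider than the visible width.
    auto* pixels = static_cast<uint8_t*>(buffer.bits);
    const size_t bytesPerPixel = 4;  // assuming an RGBA_8888 format
    for (int32_t y = 0; y < buffer.height; ++y) {
        memset(pixels + y * buffer.stride * bytesPerPixel, 0, buffer.width * bytesPerPixel);
    }

    // Step 3: hand the buffer back to the BufferQueue (queue), where it waits
    // for the consumer -- SurfaceFlinger -- to latch it.
    return ANativeWindow_unlockAndPost(window) == 0;
}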

Five postFramebuffer

After the buffer has been queued, there is also a step that clears the dirty region and a flip call, but the logic of both is fairly simple and won't be expanded here. Now let's look at the postFramebuffer function.


void SurfaceFlinger::postFramebuffer(const sp<DisplayDevice>& displayDevice) {

    auto display = displayDevice->getCompositionDisplay();
    const auto& displayState = display->getState();
    const auto displayId = display->getId();

    if (displayState.isEnabled) {
        if (displayId) {
            // Fence synchronization
            getHwComposer().presentAndGetReleaseFences(*displayId);
        }

        // Calls DisplaySurface's onFrameCommitted
        display->getRenderSurface()->onPresentDisplayCompleted();

        // Walk through the layers in z-order
        for (auto& layer : display->getOutputLayersOrderedByZ()) {
            sp<Fence> releaseFence = Fence::NO_FENCE;
            bool usedClientComposition = true;

            // The HWC doesn't release the layer's buffer from the previous frame (if any)
            // until the release fence from that frame (if any) has signaled.
            // Always get the release fence from HWC first.
            if (layer->getState().hwc) {
                const auto& hwcState = *layer->getState().hwc;
                releaseFence =
                        getHwComposer().getLayerReleaseFence(*displayId, hwcState.hwcLayer.get());
                usedClientComposition =
                        hwcState.hwcCompositionType == Hwc2::IComposerClient::Composition::CLIENT;
            }

            // If the layer was composed on the client, its release fence needs to be
            // merged with the client target's acquire fence
            if (usedClientComposition) {
                releaseFence =
                        Fence::merge("LayerRelease", releaseFence,
                                     display->getRenderSurface()->getClientTargetAcquireFence());
            }

            // Hand the Fence to the layer via onLayerDisplayed so it can synchronize
            layer->getLayerFE().onLayerDisplayed(releaseFence);
        }

        // We have a list of layers that need a fence, so the best we can do
        // is provide them with the present fence
        if (!displayDevice->getLayersNeedingFences().isEmpty()) {
            // If the list of layers that still need a Fence is not empty,
            // pass them the present fence of this composition
            sp<Fence> presentFence =
                    displayId ? getHwComposer().getPresentFence(*displayId) : Fence::NO_FENCE;
            for (auto& layer : displayDevice->getLayersNeedingFences()) {
                layer->getCompositionLayer()->getLayerFE()->onLayerDisplayed(presentFence);
            }
        }

        if (displayId) {
            // Finally, clear the HWC release fences
            getHwComposer().clearReleaseFences(*displayId);
        }
    }
}

void RenderSurface::onPresentDisplayCompleted() {
    // Calls DisplaySurface's onFrameCommitted
    mDisplaySurface->onFrameCommitted();
}

postFramebuffer basically passes some Fence signals around and then commits the data.

It is important to note how this works: during SurfaceFlinger's work, the application hands its own frame buffers to SurfaceFlinger, and SurfaceFlinger then submits the buffer containing the composed data to the hardware. However, submitting the buffer to the hardware does not mean its contents are ready to be consumed. A Fence is therefore submitted along with the data, and whoever wants to use the buffer must wait for that Fence to signal first. That is why there is so much fence signaling going on here.
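To see what "waiting for the signal" means on the other side, here is a minimal sketch using the same android::Fence class that appears in the snippet above (Fence::merge, Fence::NO_FENCE). The helper name waitBeforeReuse is invented, and waitForever is assumed to behave as declared in ui/Fence.h; this is not SurfaceFlinger code.

#include <ui/Fence.h>

using android::Fence;
using android::sp;

// Hypothetical helper on the producer side: before reusing a buffer that was
// handed to the display, block until its release fence has signaled.
void waitBeforeReuse(const sp<Fence>& releaseFence) {
    if (releaseFence == nullptr) {
        return;                                    // no fence attached, the buffer is free
    }
    releaseFence->waitForever("waitBeforeReuse");  // blocks until HWC/GPU is done with the buffer
}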

Six Summary

That's a brief introduction to what happens during composition. Here's a short summary:

  1. Composition involves two main functions
    1. doDisplayComposition: repaints the parts of the framebuffer that need repainting
    2. postFramebuffer: sends the buffer data to the display device
  2. doDisplayComposition handles the composition logic of the display, which consists of two steps
    1. doComposeSurfaces: the processing varies depending on the type of Layer (Layer and its subclasses) and the compositing method (client compositing or HWC compositing)
    2. queueBuffer: queues the finished buffer
  3. postFramebuffer sends the buffer data to the display device; Fence synchronization happens here

That wraps up the workflow of SurfaceFlinger composition, but we are still not clear about the details of HWC and Layer. Even though we now understand the overall composition flow, we don't seem to fully understand those parts yet; further study is still needed.

SurfaceFlinger's composition work is much more complicated than the preparation that comes before it, especially because of the other topics it touches. If those topics are not clear, it is hard to follow the specific logic of composition. So I have pulled the topics involved in SurfaceFlinger composition out into separate articles and cross-referenced them with the composition flow; splitting and recombining them this way makes SurfaceFlinger's composition principle easier to understand.

  • display->getRenderSurface()->dequeueBuffer and display->getRenderSurface()->queueBuffer. These involve SurfaceFlinger's producer and consumer model, the creation of GraphicBuffer, and the ION mechanism of the Android graphics system. The three topics get progressively harder, and it takes repeated reading of the code to verify one's understanding:

    • [Producers and consumers in SurfaceFlinger]
    • [GraphicBuffer creation process]
    • [ION mechanism of Android graphics system]
  • Then there is the handling of different Layer types, for example what prepareClientLayer does for each kind of layer.

  • Software composition versus hardware composition, and how HWC works.

  • Next is GLESRenderEngine's drawLayers, which involves OpenGL in the Android graphics system. I don't plan to study OpenGL in depth for now: for one thing there is a real learning curve, and for another it takes a lot of effort for an application engineer, and we still wouldn't match people in the games field who specialize in it, so a basic understanding of OpenGL is enough.

  • Finally, there is the postFramebuffer function, which covers one of the more important parts of the Android graphics system.

At this point the logic of compositing has been sorted out. I don't know whether to call it lucky or unlucky that so many side topics came out of it, but together they cover almost half of the important content in the Android graphics system. Let's keep digging into SurfaceFlinger's compositing.