This article continues with the logic of setFrameCallback to see what's going on in the ThreadedRenderer.

Let’s continue to examine the logic of the following code snippet:

void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks,
        FrameDrawingCallback frameDrawingCallback) {
    attachInfo.mIgnoreDirtyState = true;

    final Choreographer choreographer = attachInfo.mViewRootImpl.mChoreographer;
    choreographer.mFrameInfo.markDrawStart();

    updateRootDisplayList(view, callbacks);

    attachInfo.mIgnoreDirtyState = false;

    if (attachInfo.mPendingAnimatingRenderNodes != null) {
        final int count = attachInfo.mPendingAnimatingRenderNodes.size();
        for (int i = 0; i < count; i++) {
            registerAnimatingRenderNode(
                    attachInfo.mPendingAnimatingRenderNodes.get(i));
        }
        attachInfo.mPendingAnimatingRenderNodes.clear();
        attachInfo.mPendingAnimatingRenderNodes = null;
    }

    final long[] frameInfo = choreographer.mFrameInfo.mFrameInfo;
    if (frameDrawingCallback != null) {
        nSetFrameCallback(mNativeProxy, frameDrawingCallback);
    }
    int syncResult = nSyncAndDrawFrame(mNativeProxy, frameInfo, frameInfo.length);
    if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
        setEnabled(false);
        attachInfo.mViewRootImpl.mSurface.release();
        attachInfo.mViewRootImpl.invalidate();
    }
    if ((syncResult & SYNC_INVALIDATE_REQUIRED) != 0) {
        attachInfo.mViewRootImpl.invalidate();
    }
}

After ThreadedRenderer's draw method has built the root display list from the View tree, it registers a FrameCallback at the native layer:

static void android_view_ThreadedRenderer_setFrameCallback(JNIEnv* env,
        jobject clazz, jlong proxyPtr, jobject frameCallback) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    if (!frameCallback) {
        ...
    } else {
        JavaVM* vm = nullptr;
        LOG_ALWAYS_FATAL_IF(env->GetJavaVM(&vm) != JNI_OK, "Unable to get Java VM");
        auto globalCallbackRef = std::make_shared<JGlobalRefHolder>(vm,
                env->NewGlobalRef(frameCallback));
        proxy->setFrameCallback([globalCallbackRef](int64_t frameNr) {
            JNIEnv* env = getenv(globalCallbackRef->vm());
            env->CallVoidMethod(globalCallbackRef->object(), gFrameDrawingCallback.onFrameDraw,
                    static_cast<jlong>(frameNr));
        });
    }
}

RenderProxy setFrameCallback

void RenderProxy::setFrameCallback(std::function<void(int64_t)>&& callback) {
    mDrawFrameTask.setFrameCallback(std::move(callback));
}

Setting the callback is very simple: it is simply stored in the DrawFrameTask's mFrameCallback member.

ThreadedRenderer nSyncAndDrawFrame

The previous steps were all just preparation for drawing; it is at this step that the layers really start being drawn.

static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
    return proxy->syncAndDrawFrame();
}

What actually runs is RenderProxy's syncAndDrawFrame method, which is very simple: it just calls mDrawFrameTask's drawFrame.

int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}

DrawFrameTask drawFrame

int DrawFrameTask::drawFrame() {
    mSyncResult = SyncResult::OK;
    mSyncQueued = systemTime(CLOCK_MONOTONIC);
    postAndWait();

    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    AutoMutex _lock(mLock);
    mRenderThread->queue().post([this]() { run(); });
    mSignal.wait(mLock);
}

In this process, a run method is posted into mRenderThread's task queue, while the calling (UI) thread blocks on mSignal. When is the UI thread released?

void DrawFrameTask::unblockUiThread() {
    AutoMutex _lock(mLock);
    mSignal.signal();
}
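Stripped of the HWUI specifics, postAndWait and unblockUiThread form a classic post-and-wait handshake. Here is a minimal standalone sketch of the same pattern (invented names, not AOSP code) using std::condition_variable: the "UI thread" posts work to a "render thread" and blocks until the worker signals.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

struct FrameTask {
    std::mutex lock;
    std::condition_variable signal;
    bool done = false;
    int syncResult = 0;

    // Runs on the "render thread"; mirrors DrawFrameTask::run() ending in
    // unblockUiThread().
    void run() {
        syncResult = 1;  // pretend we synced and drew the frame
        std::lock_guard<std::mutex> l(lock);
        done = true;
        signal.notify_one();  // unblockUiThread()
    }

    // Runs on the "UI thread"; mirrors DrawFrameTask::postAndWait().
    int postAndWait() {
        std::unique_lock<std::mutex> l(lock);
        done = false;
        std::thread renderThread([this] { run(); });  // "post to the queue"
        signal.wait(l, [this] { return done; });      // mSignal.wait(mLock)
        renderThread.join();
        return syncResult;
    }
};
```

The condition variable plus a done flag is what lets unblockUiThread be called either before or after the UI thread actually starts waiting, without losing the wakeup.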

Once the frame state has been synced (or the frame has completed), unblockUiThread is called to unblock the UI thread. The core method posted to the render thread is run:

DrawFrameTask run

void DrawFrameTask::run() {
    bool canUnblockUiThread;
    bool canDrawThisFrame;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = info.out.canDrawThisFrame;

        if (mFrameCompleteCallback) {
            mContext->addFrameCompleteListener(std::move(mFrameCompleteCallback));
            mFrameCompleteCallback = nullptr;
        }
    }

    CanvasContext* context = mContext;
    std::function<void(int64_t)> callback = std::move(mFrameCallback);
    mFrameCallback = nullptr;

    if (canUnblockUiThread) {
        unblockUiThread();
    }

    if (CC_UNLIKELY(callback)) {
        context->enqueueFrameWork([callback, frameNr = context->getFrameNumber()]() {
            callback(frameNr);
        });
    }

    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw();
    } else {
        context->waitOnFences();
    }

    if (!canUnblockUiThread) {
        unblockUiThread();
    }
}

This can be broken down into several steps:

  • 1. Add a frame-complete callback to the CanvasContext via addFrameCompleteListener.
  • 2. Call syncFrameState to sync the Layer state and determine whether the UI thread needs to stay blocked.
  • 3. If the UI thread does not need to stay blocked, call unblockUiThread before drawing to release it.
  • 4. Enqueue the mFrameCallback registered by the Java layer via enqueueFrameWork so it is invoked with the frame number.
  • 5. If the current frame can be drawn, call CanvasContext's draw; otherwise the hardware draw fences have not been released and must be waited on.
  • 6. If the UI thread was not unblocked earlier, unblock it now.

You can see that there are two core methods:

  • 1. syncFrameState, which prepares the render tree
  • 2. CanvasContext's draw method

These two methods do the real rendering work; once you understand them, you understand the entire hardware rendering process.

DrawFrameTask syncFrameState

The syncFrameState method has actually come up quite a bit before.

Unlike the TextureView case, where the drawing happens on the app side, here it is mostly preparation work.

bool DrawFrameTask::syncFrameState(TreeInfo& info) {
    int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
    mRenderThread->timeLord().vsyncReceived(vsync);
    bool canDraw = mContext->makeCurrent();
    mContext->unpinImages();
    for (size_t i = 0; i < mLayers.size(); i++) {
        mLayers[i]->apply();
    }
    mLayers.clear();
    mContext->setContentDrawBounds(mContentDrawBounds);
    mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
    ...
    return info.prepareTextures;
}

It can be roughly divided into four steps:

  • 1. CanvasContext calls makeCurrent to make the OpenGL context current.

bool CanvasContext::makeCurrent() {
    if (mStopped) return false;
    auto result = mRenderPipeline->makeCurrent();
    ...
    return true;
}

This simply calls the render pipeline's makeCurrent, in this case OpenGLPipeline's makeCurrent, which in turn delegates to EglManager's makeCurrent.

bool EglManager::makeCurrent(EGLSurface surface, EGLint* errOut) {
    if (isCurrent(surface)) return false;
    if (surface == EGL_NO_SURFACE) {
        // Ensure we always have a valid surface & context
        surface = mPBufferSurface;
    }
    if (!eglMakeCurrent(mEglDisplay, surface, surface, mEglContext)) {
        ...
    }
    mCurrentSurface = surface;
    if (Properties::disableVsync) {
        eglSwapInterval(mEglDisplay, 0);
    }
    return true;
}

This is simply a call to EGL's eglMakeCurrent, binding the surface and context to the render thread.

  • 2. CanvasContext calls unpinImages, which delegates to the render pipeline's unpinImages and releases the previously pinned texture caches.
  • 3. Apply the Layer updates stored in mLayers. Because a TextureView's content is produced on the app side through its own OpenGL producer, its DeferredLayerUpdater is applied here to refresh the texture image.
  • 4. CanvasContext's prepareTree makes the final preparations before drawing; its result ultimately determines whether the UI thread stays blocked.

CanvasContext prepareTree

void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
        RenderNode* target) {
    mRenderThread.removeFrameCallback(this);
    ...
    mCurrentFrameInfo->importUiThreadInfo(uiFrameInfo);
    mCurrentFrameInfo->set(FrameInfoIndex::SyncQueued) = syncQueued;
    mCurrentFrameInfo->markSyncStart();

    info.damageAccumulator = &mDamageAccumulator;
    info.layerUpdateQueue = &mLayerUpdateQueue;

    mAnimationContext->startFrame(info.mode);
    mRenderPipeline->onPrepareTree();
    for (const sp<RenderNode>& node : mRenderNodes) {
        info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
        node->prepareTree(info);
    }
    mAnimationContext->runRemainingAnimations(info);

    freePrefetchedLayers();
    mIsDirty = true;
    ...
}
  • 1. Call mRenderPipeline's onPrepareTree callback.
  • 2. Call mAnimationContext's startFrame.
  • 3. Call the prepareTree method of each RenderNode stored in the mRenderNodes collection.
  • 4. Call mAnimationContext's runRemainingAnimations.

Let's ignore the animation logic and look at point three. Recall that the CanvasContext constructor first saves the root RenderNode into mRenderNodes.

In other words, the traversal of all RenderNodes starts from RootRenderNode's prepareTree.

RootRenderNode prepareTree

    virtual void prepareTree(TreeInfo& info) override {
        info.errorHandler = this;

        for (auto& anim : mRunningVDAnimators) {

            anim->getVectorDrawable()->markDirty();
        }
        if (info.mode == TreeInfo::MODE_FULL) {
            for (auto &anim : mPausedVDAnimators) {
                anim->getVectorDrawable()->setPropertyChangeWillBeConsumed(false);
                anim->getVectorDrawable()->markDirty();
            }
        }
        info.updateWindowPositions = true;
        RenderNode::prepareTree(info);
        info.updateWindowPositions = false;
        info.errorHandler = nullptr;
    }
  • 1. First process the animators in RootRenderNode's mRunningVDAnimators set; these are the vector-drawable animations that ViewRootImpl registered through registerAnimatingRenderNode. Each one is marked dirty so it will be re-rendered. We won't discuss them further here.
  • 2. Call the prepareTree method of RenderNode.

In the prepareTree method, the entire tree of RenderNodes backing the View hierarchy is traversed to prepare this frame.

RenderNode prepareTree
void RenderNode::prepareTree(TreeInfo& info) {
    MarkAndSweepRemoved observer(&info);
    bool functorsNeedLayer = Properties::debugOverdraw && !Properties::isSkiaEnabled();
    prepareTreeImpl(observer, info, functorsNeedLayer);
}

The core calls prepareTreeImpl.

RenderNode prepareTreeImpl
void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
    info.damageAccumulator->pushTransform(this);

    if (info.mode == TreeInfo::MODE_FULL) {
        pushStagingPropertiesChanges(info);
    }
    uint32_t animatorDirtyMask = 0;
    ...
    bool willHaveFunctor = false;
    if (info.mode == TreeInfo::MODE_FULL && mStagingDisplayList) {
        willHaveFunctor = mStagingDisplayList->hasFunctor();
    } else if (mDisplayList) {
        ...
    }
    ...
    if (CC_UNLIKELY(mPositionListener.get())) {
        mPositionListener->onPositionUpdated(*this, info);
    }

    prepareLayer(info, animatorDirtyMask);
    if (info.mode == TreeInfo::MODE_FULL) {
        pushStagingDisplayListChanges(observer, info);
    }

    if (mDisplayList) {
        info.out.hasFunctors |= mDisplayList->hasFunctor();
        bool isDirty = mDisplayList->prepareListAndChildren(
                observer, info, childFunctorsNeedLayer,
                [](RenderNode* child, TreeObserver& observer, TreeInfo& info,
                   bool functorsNeedLayer) {
                    child->prepareTreeImpl(observer, info, functorsNeedLayer);
                });
        if (isDirty) {
            damageSelf(info);
        }
    }
    pushLayerUpdate(info);

    info.damageAccumulator->popTransform();
}

Remember that the mode of the TreeInfo is TreeInfo::MODE_FULL.

  • 1. First, pushTransform pushes the current RenderNode onto the DirtyStack held by TreeInfo.
  • 2. Check whether the DisplayList's hasFunctor returns true, i.e. whether its set of functors is non-empty. Functors are added via the following method of the Java layer's DisplayListCanvas:
    public void drawGLFunctor2(long drawGLFunctor, @Nullable Runnable releasedCallback) {
        nCallDrawGLFunction(mNativeCanvasWrapper, drawGLFunctor, releasedCallback);
    }
  • 3. Notify the position listener via mPositionListener's onPositionUpdated.

  • 4. Call pushStagingDisplayListChanges; this is when the staging DisplayList, including its recorded functor pointers, gets synced:

void RenderNode::damageSelf(TreeInfo& info) {
    if (isRenderable()) {
        if (properties().getClipDamageToBounds()) {
            info.damageAccumulator->dirty(0, 0, properties().getWidth(),
                    properties().getHeight());
        } else {
            info.damageAccumulator->dirty(DIRTY_MIN, DIRTY_MIN, DIRTY_MAX, DIRTY_MAX);
        }
    }
}

void RenderNode::pushStagingDisplayListChanges(TreeObserver& observer, TreeInfo& info) {
    if (mNeedsDisplayListSync) {
        mNeedsDisplayListSync = false;
        damageSelf(info);
        syncDisplayList(observer, &info);
        damageSelf(info);
    }
}

damageSelf updates TreeInfo's dirty area: the width and height held by the RenderNode's properties object are used as the damaged region and recorded in the TreeInfo.

  • 5. Execute mDisplayList's prepareListAndChildren, whose callback recursively calls prepareTreeImpl on every child RenderNode stored in the DisplayList.

RenderNode syncDisplayList

void RenderNode::syncDisplayList(TreeObserver& observer, TreeInfo* info) {
    if (mStagingDisplayList) {
        mStagingDisplayList->updateChildren([](RenderNode* child) {
            child->incParentRefCount();
        });
    }
    deleteDisplayList(observer, info);
    mDisplayList = mStagingDisplayList;
    mStagingDisplayList = nullptr;
    if (mDisplayList) {
        mDisplayList->syncContents();
    }
}
  • 1. mStagingDisplayList's updateChildren increases the reference count of each child.

  • 2. Assign mStagingDisplayList to mDisplayList and call mDisplayList's syncContents.

void DisplayList::syncContents() {
    for (auto& iter : functors) {
        (*iter.functor)(DrawGlInfo::kModeSync, nullptr);
    }
    for (auto& vectorDrawable : vectorDrawables) {
        vectorDrawable->syncProperties();
    }
}

This invokes each GL functor passed down from the Java layer with the kModeSync flag, then syncs the properties of each VectorDrawable.
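The mechanism is easy to model: the display list just stores callables that were recorded earlier, and syncContents replays them with a mode flag so an external GL producer can copy its pending state. A minimal sketch (invented names, not the AOSP DisplayList):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Mode flag passed to each functor, mirroring DrawGlInfo::kModeSync.
enum Mode { kModeSync = 0, kModeDraw = 1 };

struct MiniDisplayList {
    // Functors recorded on the UI thread, replayed on the render thread.
    std::vector<std::function<void(Mode)>> functors;

    // Mirrors DisplayList::syncContents(): invoke every functor in sync mode.
    void syncContents() {
        for (auto& f : functors) f(kModeSync);
    }
};
```

In HWUI the functor is a raw function pointer coming through JNI (nCallDrawGLFunction); std::function here just keeps the sketch self-contained.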

DisplayList prepareListAndChildren

bool DisplayList::prepareListAndChildren(
        TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer,
        std::function<void(RenderNode*, TreeObserver&, TreeInfo&, bool)> childFn) {
    info.prepareTextures = info.canvasContext.pinImages(bitmapResources);

    for (auto&& op : children) {
        RenderNode* childNode = op->renderNode;
        info.damageAccumulator->pushTransform(&op->localMatrix);
        bool childFunctorsNeedLayer =
                functorsNeedLayer;  
        childFn(childNode, observer, info, childFunctorsNeedLayer);
        info.damageAccumulator->popTransform();
    }

    bool isDirty = false;
    for (auto& vectorDrawable : vectorDrawables) {
        if (vectorDrawable->isDirty()) {
            isDirty = true;
        }
        vectorDrawable->setPropertyChangeWillBeConsumed(true);
    }
    return isDirty;
}

pushTransform pushes each child's local matrix onto the dirty stack, then prepareTreeImpl is called recursively for that child. Once the child and all of its descendants have accumulated their total dirty area, DamageAccumulator's popTransform folds it back into the parent.

Finally, note that the bool info.prepareTextures is the result of the CanvasContext's pinImages method. Its value determines whether the UI thread must stay blocked while ThreadedRenderer draws.

OpenGLPipeline::pinImages

Its core is OpenGLPipeline::pinImages:

bool OpenGLPipeline::pinImages(LsaVector<sk_sp<Bitmap>>& images) {
    TextureCache& cache = Caches::getInstance().textureCache;
    bool prefetchSucceeded = true;
    for (auto& bitmapResource : images) {
        prefetchSucceeded &= cache.prefetchAndMarkInUse(this, bitmapResource.get());
    }
    return prefetchSucceeded;
}

The native layer caches the texture IDs of all previously used Bitmap resources in the TextureCache. prefetchSucceeded becomes false only if a cached Bitmap's texture has been evicted or a texture allocation fails (for example, the requested image texture is too large).

In short, whether the UI thread can be released early depends on whether the hardware-rendered Bitmap caches are still valid. If they are, the UI thread does not need to stay blocked; if they have been invalidated, the UI thread must stay in lockstep with the render thread to avoid inconsistent frames.

Responsibilities of DamageAccumulator in TreeInfo

You can see that DamageAccumulator is used here in roughly three steps:

  • 1. DamageAccumulator.pushTransform pushes the RenderNodeOp's localMatrix onto the dirty stack.
  • 2. The childFn method pointer actually refers to the child RenderNode's prepareTreeImpl:

child->prepareTreeImpl(observer, info, functorsNeedLayer);

  • 3. DamageAccumulator.popTransform pops it back off.

What does DamageAccumulator do in these three steps?

void DamageAccumulator::pushTransform(const RenderNode* transform) {
    pushCommon();
    mHead->type = TransformRenderNode;
    mHead->renderNode = transform;
}

void DamageAccumulator::pushCommon() {
    if (!mHead->next) {
        DirtyStack* nextFrame = mAllocator.create_trivial<DirtyStack>();
        nextFrame->next = nullptr;
        nextFrame->prev = mHead;
        mHead->next = nextFrame;
    }
    mHead = mHead->next;
    mHead->pendingDirty.setEmpty();
}

void DamageAccumulator::popTransform() {
    LOG_ALWAYS_FATAL_IF(mHead->prev == mHead, "Cannot pop the root frame!");
    DirtyStack* dirtyFrame = mHead;
    mHead = mHead->prev;
    switch (dirtyFrame->type) {
        case TransformRenderNode:
            applyRenderNodeTransform(dirtyFrame);
            break;
        case TransformMatrix4:
            ...
            break;
        case TransformNone:
            ...
            break;
        default:
            LOG_ALWAYS_FATAL("Tried to pop an invalid type: %d", dirtyFrame->type);
    }
}

Pushing essentially appends a frame at the tail of a doubly linked list of DirtyStack nodes (reusing an already-allocated frame when one exists), updates mHead to point at it, sets its type to TransformRenderNode, and stores the RenderNode.

Each pop moves mHead back to the previous frame through the prev pointer, then executes applyRenderNodeTransform with the frame that was just popped.
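The frame-reusing stack can be sketched in isolation. Below, an invented DamageStack mimics pushCommon and popTransform: frames form a doubly linked list, popped frames stay allocated so the next push can reuse them, and popping folds a frame's damage into its parent (with a plain int standing in for the SkRect and the full transform math):

```cpp
#include <cassert>

struct DirtyFrame {
    DirtyFrame* prev = nullptr;
    DirtyFrame* next = nullptr;
    int pendingDirty = 0;  // stand-in for the SkRect pendingDirty
};

class DamageStack {
public:
    DamageStack() {
        mRoot.prev = &mRoot;  // root frame points back at itself
        mHead = &mRoot;
    }
    ~DamageStack() {
        // In HWUI a LinearAllocator owns the frames; here we free them.
        for (DirtyFrame* f = mRoot.next; f != nullptr;) {
            DirtyFrame* n = f->next;
            delete f;
            f = n;
        }
    }

    // pushCommon(): reuse mHead->next if it was allocated on an earlier
    // push, otherwise allocate; then reset the new head's pending damage.
    void push() {
        if (!mHead->next) {
            DirtyFrame* frame = new DirtyFrame();
            frame->prev = mHead;
            mHead->next = frame;
        }
        mHead = mHead->next;
        mHead->pendingDirty = 0;
    }

    // popTransform()/applyRenderNodeTransform(), radically simplified:
    // add this node's own damage, then fold the total into the parent.
    void pop(int ownDamage) {
        assert(mHead->prev != mHead && "cannot pop the root frame");
        DirtyFrame* frame = mHead;
        mHead = mHead->prev;
        frame->pendingDirty += ownDamage;
        mHead->pendingDirty += frame->pendingDirty;
    }

    int rootDamage() const { return mRoot.pendingDirty; }

private:
    DirtyFrame mRoot;
    DirtyFrame* mHead;
};
```

Keeping popped frames linked instead of freeing them is why a deep View tree only ever allocates as many frames as its maximum depth.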

applyRenderNodeTransform
static inline void mapRect(const Matrix4* matrix, const SkRect& in, SkRect* out) {
    if (in.isEmpty()) return;
    Rect temp(in);
    if (CC_LIKELY(!matrix->isPerspective())) {
        matrix->mapRect(temp);
    } else {
        temp.set(DIRTY_MIN, DIRTY_MIN, DIRTY_MAX, DIRTY_MAX);
    }
    out->join(RECT_ARGS(temp));
}

void DamageAccumulator::applyRenderNodeTransform(DirtyStack* frame) {
    if (frame->pendingDirty.isEmpty()) {
        return;
    }
    const RenderProperties& props = frame->renderNode->properties();
    if (props.getAlpha() <= 0) {
        return;
    }
    if (props.getClipDamageToBounds() && !frame->pendingDirty.isEmpty()) {
        if (!frame->pendingDirty.intersect(0, 0, props.getWidth(), props.getHeight())) {
            frame->pendingDirty.setEmpty();
        }
    }
    mapRect(props, frame->pendingDirty, &mHead->pendingDirty);
    if (props.getProjectBackwards() && !frame->pendingDirty.isEmpty()) {
        ...
    }
}

Remember that by the time applyRenderNodeTransform runs, mHead already points at the previous frame, and the argument is the frame of the RenderNode that was just popped. So it is actually quite simple:

  • 1. First get the current RenderNode's properties and clip the pending dirty area to the node's bounds via an intersection.
  • 2. Use mapRect to map the popped frame's pendingDirty through the node's transform and join it into the previous frame's pendingDirty.

In this recursive way, dirty areas propagate from lower-level RenderNodes up to their ancestors, finally yielding the total area that needs to be redrawn.
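Here is a runnable toy version (invented names; a plain translation stands in for the full Matrix4 mapRect) of that clip-map-join step:

```cpp
#include <algorithm>
#include <cassert>

// Simplified rect with the two operations the real code relies on:
// intersect (clip to bounds) and join (union into the parent's damage).
struct Rect {
    float left = 0, top = 0, right = 0, bottom = 0;
    bool isEmpty() const { return left >= right || top >= bottom; }

    // Like frame->pendingDirty.intersect(0, 0, w, h).
    bool intersect(float w, float h) {
        left = std::max(left, 0.f);  top = std::max(top, 0.f);
        right = std::min(right, w);  bottom = std::min(bottom, h);
        return !isEmpty();
    }

    // Like out->join(temp) at the end of mapRect.
    void join(const Rect& r) {
        if (r.isEmpty()) return;
        if (isEmpty()) { *this = r; return; }
        left = std::min(left, r.left);    top = std::min(top, r.top);
        right = std::max(right, r.right); bottom = std::max(bottom, r.bottom);
    }
};

// One applyRenderNodeTransform step, with the node's transform reduced to
// a translation by (childX, childY) inside its parent.
Rect applyChildDamage(Rect childDirty, float childW, float childH,
                      float childX, float childY, Rect parentDirty) {
    if (!childDirty.intersect(childW, childH)) return parentDirty;  // clip
    childDirty.left += childX;  childDirty.right  += childX;       // "mapRect"
    childDirty.top  += childY;  childDirty.bottom += childY;
    parentDirty.join(childDirty);                                   // fold up
    return parentDirty;
}
```

A child's damage of (10, 10, 200, 200) inside a 100x100 node positioned at (50, 50) clips to (10, 10, 100, 100) and lands in the parent as (60, 60, 150, 150).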

RenderNode pushLayerUpdate

Back in RenderNode's prepareTreeImpl: after a node's display list and children have been processed by the prepareTree step, pushLayerUpdate is called.

void RenderNode::pushLayerUpdate(TreeInfo& info) {
    LayerType layerType = properties().effectiveLayerType();
    if (CC_LIKELY(layerType != LayerType::RenderLayer) || CC_UNLIKELY(!isRenderable()) ||
            CC_UNLIKELY(properties().getWidth() == 0) ||
            CC_UNLIKELY(properties().getHeight() == 0) ||
            CC_UNLIKELY(!properties().fitsOnLayer())) {
        if (CC_UNLIKELY(hasLayer())) {
            renderthread::CanvasContext::destroyLayer(this);
        }
        return;
    }

    if (info.canvasContext.createOrUpdateLayer(this, *info.damageAccumulator,
            info.errorHandler)) {
        damageSelf(info);
    }

    if (!hasLayer()) {
        return;
    }

    SkRect dirty;
    info.damageAccumulator->peekAtDirty(&dirty);
    info.layerUpdateQueue->enqueueLayerWithDamage(this, dirty);
    info.canvasContext.markLayerInUse(this);
}

By this point, the prepareTree pass has already accumulated in TreeInfo the dirty areas that need refreshing.

  • 1. CanvasContext's createOrUpdateLayer creates (or resizes) the drawing Layer for this RenderNode.
  • 2. peekAtDirty reads the dirty area at the DamageAccumulator's mHead, i.e. the area accumulated during prepareTree. enqueueLayerWithDamage then records this RenderNode together with the region of its Layer that needs refreshing.

Note that pushLayerUpdate bails out early when the RenderNode is not of the RenderLayer type. RenderNodes actually come in three layer types in the native layer:

enum class LayerType {
    None = 0,
    Software = 1,
    RenderLayer = 2,
};

Only RenderLayer represents a hardware layer, and only nodes whose layerType is RenderLayer get off-screen rendering memory allocated.

Of course, these three also correspond to the three flag bits of the Java layer, which have appeared several times in previous articles:

    @ViewDebug.ExportedProperty(category = "drawing", mapping = {
            @ViewDebug.IntToString(from = LAYER_TYPE_NONE, to = "NONE"),
            @ViewDebug.IntToString(from = LAYER_TYPE_SOFTWARE, to = "SOFTWARE"),
            @ViewDebug.IntToString(from = LAYER_TYPE_HARDWARE, to = "HARDWARE")
    })
    int mLayerType = LAYER_TYPE_NONE;

The default is LAYER_TYPE_NONE.

LayerUpdateQueue enqueueLayerWithDamage

void LayerUpdateQueue::enqueueLayerWithDamage(RenderNode* renderNode, Rect damage) {
    damage.roundOut();
    damage.doIntersect(0, 0, renderNode->getWidth(), renderNode->getHeight());
    if (!damage.isEmpty()) {
        for (Entry& entry : mEntries) {
            if (CC_UNLIKELY(entry.renderNode == renderNode)) {
                entry.damage.unionWith(damage);
                return;
            }
        }
        mEntries.emplace_back(renderNode, damage);
    }
}
Copy the code

The dirty area is first clipped to the current RenderNode's bounds. If the result is non-empty and an Entry in the LayerUpdateQueue already records the same RenderNode, that entry's damage is enlarged by union and the method returns.

If it is a new RenderNode, it is recorded in the mEntries as a new RenderNode that needs to be refreshed.
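A minimal sketch (invented names, integer rects) of that clip-then-union-or-append behavior:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Damage {
    int l, t, r, b;
    bool empty() const { return l >= r || t >= b; }
};

struct ToyLayerQueue {
    struct Entry { int nodeId; Damage damage; };
    std::vector<Entry> entries;

    // Mirrors LayerUpdateQueue::enqueueLayerWithDamage: clip the damage to
    // the node's bounds, then union into an existing entry or append.
    void enqueue(int nodeId, Damage d, int nodeW, int nodeH) {
        d.l = std::max(d.l, 0);     d.t = std::max(d.t, 0);
        d.r = std::min(d.r, nodeW); d.b = std::min(d.b, nodeH);
        if (d.empty()) return;
        for (Entry& e : entries) {
            if (e.nodeId == nodeId) {  // same node: grow its damage
                e.damage.l = std::min(e.damage.l, d.l);
                e.damage.t = std::min(e.damage.t, d.t);
                e.damage.r = std::max(e.damage.r, d.r);
                e.damage.b = std::max(e.damage.b, d.b);
                return;
            }
        }
        entries.push_back({nodeId, d});  // first damage for this node
    }
};
```

Deduplicating per node means each layer is redrawn at most once per frame, no matter how many times it was damaged.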

Objects involved in CanvasContext's off-screen rendering

Before we get to that, let's take a look at a few important objects:

  • 1. RenderState: the renderer's state
  • 2. OffscreenBufferPool: the memory pool for off-screen rendering buffers
  • 3. OffscreenBuffer: a block of off-screen rendering memory

RenderState

class RenderState {
...

private:
...
    renderthread::RenderThread& mRenderThread;
    Caches* mCaches = nullptr;

    Blend* mBlend = nullptr;
    MeshState* mMeshState = nullptr;
    Scissor* mScissor = nullptr;
    Stencil* mStencil = nullptr;

    OffscreenBufferPool* mLayerPool = nullptr;

    std::set<Layer*> mActiveLayers;
    std::set<DeferredLayerUpdater*> mActiveLayerUpdaters;
    std::set<renderthread::CanvasContext*> mRegisteredContexts;

    GLsizei mViewportWidth;
    GLsizei mViewportHeight;
    GLuint mFramebuffer;

    pthread_t mThreadId;
};

You can see that in RenderState, there are several core objects.

  • 1. Caches: OpenGL-related caches
  • 2. OffscreenBufferPool: the off-screen rendering buffer pool
  • 3. mActiveLayers: the set of Layers carrying drawn pixel memory
  • 4. mActiveLayerUpdaters: the set of DeferredLayerUpdaters, generally the TextureLayers used by TextureView
  • 5. mViewportWidth and mViewportHeight: the size of the whole window
  • 6. mFramebuffer: the OpenGL ID of the off-screen render target

OffscreenBufferPool

class OffscreenBufferPool {
...

private:
    struct Entry {
        Entry() {}

        Entry(const uint32_t layerWidth, const uint32_t layerHeight, bool wideColorGamut)
                : width(OffscreenBuffer::computeIdealDimension(layerWidth))
                , height(OffscreenBuffer::computeIdealDimension(layerHeight))
                , wideColorGamut(wideColorGamut) {}

        explicit Entry(OffscreenBuffer* layer)
                : layer(layer)
                , width(layer->texture.width())
                , height(layer->texture.height())
                , wideColorGamut(layer->wideColorGamut) {}

...

        OffscreenBuffer* layer = nullptr;
        uint32_t width = 0;
        uint32_t height = 0;
        bool wideColorGamut = false;
    };  // struct Entry

    std::multiset<Entry> mPool;

    uint32_t mSize = 0;
    uint32_t mMaxSize;
};  

You can see that OffscreenBufferPool keeps its entries in a std::multiset (an ordered C++ container that allows duplicate keys). Each Entry holds an OffscreenBuffer that callers can request from mPool.

The following method is called whenever we need to get a new OffscreenBuffer object:

OffscreenBuffer* OffscreenBufferPool::get(RenderState& renderState, const uint32_t width,
        const uint32_t height, bool wideColorGamut) {
    OffscreenBuffer* layer = nullptr;
    Entry entry(width, height, wideColorGamut);
    auto iter = mPool.find(entry);
    if (iter != mPool.end()) {
        entry = *iter;
        mPool.erase(iter);
        layer = entry.layer;
        layer->viewportWidth = width;
        layer->viewportHeight = height;
        mSize -= layer->getSizeInBytes();
    } else {
        layer = new OffscreenBuffer(renderState, Caches::getInstance(), width, height,
                wideColorGamut);
    }
    return layer;
}

First, an Entry key is built from the width, height, and color mode and used to look up a cached buffer in mPool. On a hit, the entry is removed from mPool and its OffscreenBuffer is returned; otherwise a new OffscreenBuffer is allocated.

When an OffscreenBuffer is no longer needed, the following method reclaims it:

void OffscreenBufferPool::putOrDelete(OffscreenBuffer* layer) {
    const uint32_t size = layer->getSizeInBytes();
    if (size < mMaxSize) {
        // TODO: Use an LRU
        while (mSize + size > mMaxSize) {
            OffscreenBuffer* victim = mPool.begin()->layer;
            mSize -= victim->getSizeInBytes();
            delete victim;
            mPool.erase(mPool.begin());
        }
        // clear region, since it's no longer valid
        layer->region.clear();
        Entry entry(layer);
        mPool.insert(entry);
        mSize += size;
    } else {
        delete layer;
    }
}

This adds the OffscreenBuffer back into mPool, first evicting older buffers if the pool would exceed its maximum size.

In fact, this is the classic object-pool (flyweight-style) pattern that we often use in development; its advantage is that memory is recycled rather than repeatedly allocated and freed.
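The pool's get/putOrDelete pair can be reduced to a small standalone sketch (invented names; the real pool also rounds sizes to ideal dimensions and tracks color gamut):

```cpp
#include <cassert>
#include <cstdint>
#include <set>

struct Buffer {
    uint32_t width, height;
    uint32_t bytes() const { return width * height * 4; }  // RGBA8888
};

class ToyBufferPool {
public:
    explicit ToyBufferPool(uint32_t maxBytes) : mMaxSize(maxBytes) {}
    ~ToyBufferPool() {
        for (auto& e : mPool) delete e.buf;
    }

    // Like OffscreenBufferPool::get: reuse a pooled buffer of the same
    // size when one exists, otherwise allocate a fresh one.
    Buffer* get(uint32_t w, uint32_t h) {
        auto it = mPool.find({w, h});
        if (it != mPool.end()) {          // cache hit: reuse
            Buffer* b = it->buf;
            mSize -= b->bytes();
            mPool.erase(it);
            return b;
        }
        return new Buffer{w, h};          // cache miss: allocate
    }

    // Like putOrDelete: return the buffer to the pool, evicting until the
    // total pooled bytes stay under the cap; oversized buffers are freed.
    void putOrDelete(Buffer* b) {
        if (b->bytes() >= mMaxSize) { delete b; return; }
        while (mSize + b->bytes() > mMaxSize) {
            auto victim = mPool.begin();
            mSize -= victim->buf->bytes();
            delete victim->buf;
            mPool.erase(victim);
        }
        mSize += b->bytes();
        mPool.insert({b->width, b->height, b});
    }

    uint32_t sizeInBytes() const { return mSize; }

private:
    struct Entry {
        uint32_t width, height;
        Buffer* buf = nullptr;
        // Order by (width, height) so find({w, h}) matches any buffer of
        // that size, which is exactly what the multiset key achieves.
        bool operator<(const Entry& o) const {
            return width != o.width ? width < o.width : height < o.height;
        }
    };
    std::multiset<Entry> mPool;
    uint32_t mSize = 0;
    uint32_t mMaxSize;
};
```

Asking for a 64x64 buffer right after returning one hands back the very same allocation instead of touching the allocator again.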

OffscreenBuffer
class OffscreenBuffer : GpuMemoryTracker {
public:
    OffscreenBuffer(RenderState& renderState, Caches& caches, uint32_t viewportWidth,
            uint32_t viewportHeight, bool wideColorGamut = false);
    ...
    RenderState& renderState;

    uint32_t viewportWidth;
    uint32_t viewportHeight;
    Texture texture;

    bool wideColorGamut = false;

    Region region;

    Matrix4 inverseTransformInWindow;

    GLsizei elementCount = 0;
    GLuint vbo = 0;

    bool hasRenderedSinceRepaint;
};

You can see that the OffscreenBuffer object actually holds the following core objects:

  • 1. viewportWidth and viewportHeight: the size of this off-screen buffer
  • 2. texture: essentially an OpenGL texture object
  • 3. inverseTransformInWindow: a matrix transform into window coordinates
  • 4. vbo: the vertex buffer object

Layer

class Layer : public VirtualLightRefBase, GpuMemoryTracker {
public:
    enum class Api {
        OpenGL = 0,
        Vulkan = 1,
    };

    Api getApi() const { return mApi; }
    ...
protected:
    ...
private:
    void buildColorSpaceWithFilter();

    Api mApi;

    sk_sp<SkColorFilter> mColorFilter;
    android_dataspace mCurrentDataspace = HAL_DATASPACE_UNKNOWN;
    sk_sp<SkColorFilter> mColorSpaceWithFilter;

    bool forceFilter = false;

    int alpha;
    SkBlendMode mode;

    mat4 texTransform;
    mat4 transform;
};  // struct Layer

You can see that Layer is actually a simple object: it essentially just holds the alpha, the blend mode, a color filter, transform matrices, and so on. In practice the Layer struct is not used directly; a more specific subclass such as GlLayer is used instead.

CanvasContext createOrUpdateLayer

This method calls the createOrUpdateLayer method of the render pipeline.

bool OpenGLPipeline::createOrUpdateLayer(RenderNode* node,
        const DamageAccumulator& damageAccumulator, bool wideColorGamut,
        ErrorHandler* errorHandler) {
    RenderState& renderState = mRenderThread.renderState();
    OffscreenBufferPool& layerPool = renderState.layerPool();
    bool transformUpdateNeeded = false;
    if (node->getLayer() == nullptr) {
        node->setLayer(
                layerPool.get(renderState, node->getWidth(), node->getHeight(), wideColorGamut));
        transformUpdateNeeded = true;
    } else if (!layerMatchesWH(node->getLayer(), node->getWidth(), node->getHeight())) {
        if (node->properties().fitsOnLayer()) {
            node->setLayer(layerPool.resize(node->getLayer(), node->getWidth(),
                    node->getHeight()));
        } else {
            destroyLayer(node);
        }
        transformUpdateNeeded = true;
    }
    if (transformUpdateNeeded && node->getLayer()) {
        Matrix4 windowTransform;
        damageAccumulator.computeCurrentTransform(&windowTransform);
        node->getLayer()->setWindowTransform(windowTransform);
    }
    if (!node->hasLayer()) {
        ...
    }
    return transformUpdateNeeded;
}

Having looked at the core objects above, you can see that this step assigns an OffscreenBuffer, mainly for its texture, to each layered RenderNode. After this step, the RenderNode is able to draw its image.

That concludes the syncFrameState part of the flow.

CanvasContext draw

void CanvasContext::draw() {
    SkRect dirty;
    mDamageAccumulator.finish(&dirty);
    ...
    Frame frame = mRenderPipeline->getFrame();
    SkRect windowDirty = computeDirtyRect(frame, &dirty);

    bool drew = mRenderPipeline->draw(frame, windowDirty, dirty, mLightGeometry,
            &mLayerUpdateQueue, mContentDrawBounds, mOpaque, mWideColorGamut, mLightInfo,
            mRenderNodes, &(profiler()));

    int64_t frameCompleteNr = mFrameCompleteCallbacks.size() ? getFrameNumber() : -1;

    waitOnFences();

    bool requireSwap = false;
    bool didSwap = mRenderPipeline->swapBuffers(frame, drew, windowDirty, mCurrentFrameInfo,
            &requireSwap);

    mIsDirty = false;

    if (requireSwap) {
        if (!didSwap) {
            setSurface(nullptr);
        }
        ...
    } else {
        ...
    }
    ...
    if (didSwap) {
        for (auto& func : mFrameCompleteCallbacks) {
            std::invoke(func, frameCompleteNr);
        }
        mFrameCompleteCallbacks.clear();
    }
    ...
}

In fact, the draw of CanvasContext can be divided into the following steps:

  • 1. DamageAccumulator::finish computes the dirty area that needs to be rendered.
  • 2. getFrame obtains the Frame object wrapping the Surface, and computeDirtyRect merges the dirty area computed by the DamageAccumulator into the Frame's dirty rect.
  • 3. The render pipeline's draw method is called to start rendering.
  • 4. waitOnFences waits for the draw fences used by OpenGL to be released.
  • 5. swapBuffers of the render pipeline is called, sending the GraphicBuffer in the Surface to SurfaceFlinger for composition.
  • 6. If the swap fails, the Surface of CanvasContext is set to null.
  • 7. Finally, the frame-complete callbacks registered earlier are invoked.

DamageAccumulator::finish

void DamageAccumulator::finish(SkRect* totalDirty) {
    *totalDirty = mHead->pendingDirty;
    totalDirty->roundOut(totalDirty);
    mHead->pendingDirty.setEmpty();
}

This simply copies the pending dirty area accumulated in mHead into the output SkRect, rounds it out to integer bounds, and clears the pending area.
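The roundOut call matters: the dirty area is tracked in floats, and rounding outward to integer boundaries guarantees no damaged pixel falls outside the dirty rect. A minimal stand-in (not the real SkRect, which lives in Skia) behaves like this:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for SkRect::roundOut: expand a float rect outward to
// integer boundaries so no damaged pixel is left outside the dirty area.
struct FRect {
    float left, top, right, bottom;
    void roundOut() {
        left   = std::floor(left);
        top    = std::floor(top);
        right  = std::ceil(right);
        bottom = std::ceil(bottom);
    }
};
```

Note the asymmetry: left/top round down while right/bottom round up, so the rect only ever grows.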

CanvasContext gets the area to render this time

Let’s start with the Frame header for a key object:

class Frame {
public:
....

private:
    Frame() {}
    friend class EglManager;

    int32_t mWidth;
    int32_t mHeight;
    int32_t mBufferAge;

    EGLSurface mSurface;
};

It is actually quite simple: the Frame holds the width and height of the current EGLSurface, the buffer age, and the EGLSurface object itself.

GetFrame Gets the Frame object
Frame OpenGLPipeline::getFrame() {
    return mEglManager.beginFrame(mEglSurface);
}
Frame EglManager::beginFrame(EGLSurface surface) {
    makeCurrent(surface);
    Frame frame;
    frame.mSurface = surface;
    eglQuerySurface(mEglDisplay, surface, EGL_WIDTH, &frame.mWidth);
    eglQuerySurface(mEglDisplay, surface, EGL_HEIGHT, &frame.mHeight);
    frame.mBufferAge = queryBufferAge(surface);
    eglBeginFrame(mEglDisplay, surface);
    return frame;
}

It is easy to follow. First, makeCurrent makes the given EGLSurface the current rendering context. The frame's mSurface is then set to the EGLSurface held by the EglManager, eglQuerySurface is used to query the surface's width and height from mEglDisplay (the display object), the buffer age is queried, and the frame is returned.

ComputeDirtyRect Computes the dirty area of the Frame
SkRect CanvasContext::computeDirtyRect(const Frame& frame, SkRect* dirty) {
    if (frame.width() != mLastFrameWidth || frame.height() != mLastFrameHeight) {
        dirty->setEmpty();
        mLastFrameWidth = frame.width();
        mLastFrameHeight = frame.height();
    } else if (mHaveNewSurface || frame.bufferAge() == 0) {
        dirty->setEmpty();
    } else {
        if (!dirty->isEmpty() && !dirty->intersect(0, 0, frame.width(), frame.height())) {
            dirty->setEmpty();
        }
        profiler().unionDirty(dirty);
    }

    if (dirty->isEmpty()) {
        dirty->set(0, 0, frame.width(), frame.height());
    }

    SkRect windowDirty(*dirty);

    if (frame.bufferAge() > 1) {
        if (frame.bufferAge() > (int)mSwapHistory.size()) {
            dirty->set(0, 0, frame.width(), frame.height());
        } else {
            for (int i = mSwapHistory.size() - 1;
                 i > ((int)mSwapHistory.size()) - frame.bufferAge(); i--) {
                dirty->join(mSwapHistory[i].damage);
            }
        }
    }
    return windowDirty;
}

The process is actually quite simple: if the width and height obtained from the EGLSurface differ from last frame's, mLastFrameWidth and mLastFrameHeight are updated and the dirty area is cleared. The dirty area is also cleared when the Surface is new or the buffer age is 0, meaning the buffer's previous contents cannot be reused.

If the dirty area is empty, you need to set the width and height of the frame to the dirty area for global refresh.

If bufferAge is greater than 1, it is compared against the size of mSwapHistory, which records the recent successful swaps. If bufferAge exceeds the history, the buffer's contents are too old to reuse, so the dirty area is set to the whole frame. Otherwise, the damage rects of the corresponding mSwapHistory entries are joined into the current dirty area.

Finally, windowDirty, the dirty area as it stood before the buffer-age expansion, is returned.
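The buffer-age branch can be condensed into a small sketch (names and types are ours, not AOSP's): when the buffer handed back by EGL is several frames old, the damage of every intervening frame must be joined into this frame's dirty rect, and if the history is too short the only safe choice is a full redraw.

```cpp
#include <algorithm>
#include <cassert>
#include <deque>

struct Rect {
    int left, top, right, bottom;
    void join(const Rect& o) {
        left = std::min(left, o.left);     top = std::min(top, o.top);
        right = std::max(right, o.right);  bottom = std::max(bottom, o.bottom);
    }
};

// Sketch of the buffer-age branch of computeDirtyRect: expand the dirty rect
// to cover the damage of the last (bufferAge - 1) swapped frames.
Rect expandForBufferAge(Rect dirty, int bufferAge,
                        const std::deque<Rect>& swapHistory,
                        int frameW, int frameH) {
    if (bufferAge > 1) {
        if (bufferAge > (int)swapHistory.size()) {
            // Not enough history: fall back to a full-frame redraw.
            return {0, 0, frameW, frameH};
        }
        for (int i = (int)swapHistory.size() - 1;
             i > (int)swapHistory.size() - bufferAge; i--) {
            dirty.join(swapHistory[i]);
        }
    }
    return dirty;
}
```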

Draw method for rendering pipes

Before getting into the specifics of the logic, there are a few key objects that need to be clarified.

FrameBuilder a FrameBuilder
class FrameBuilder : public CanvasStateClient {
public:
    struct LightGeometry {
        Vector3 center;
        float radius;
    };
    ...
    LinearAllocator mAllocator;
    LinearStdAllocator<void*> mStdAllocator;
    // List of every deferred layer's render state. Replayed in reverse order to render a frame.
    LsaVector<LayerBuilder*> mLayerBuilders;
    LsaVector<size_t> mLayerStack;
    CanvasState mCanvasState;
    Caches& mCaches;
    float mLightRadius;
    const bool mDrawFbo0;
};

We’ll just focus on its properties here. You can see that the FrameBuilder contains the following core objects:

  • 1. mAllocator and mStdAllocator are linear allocators used to allocate deferred ops quickly
  • 2. mLayerBuilders is the collection of LayerBuilder layer builders
  • 3. mLayerStack is the stack of layer indices
  • 4. mCaches is the OpenGL cache

There is another core object, the LayerBuilder Layer constructor

LayerBuilder Layer constructor
class LayerBuilder {
    PREVENT_COPY_AND_ASSIGN(LayerBuilder);

public:
    LayerBuilder(uint32_t width, uint32_t height, const Rect& repaintRect)
            : LayerBuilder(width, height, repaintRect, nullptr, nullptr){};
    LayerBuilder(uint32_t width, uint32_t height, const Rect& repaintRect,
                 const BeginLayerOp* beginLayerOp, RenderNode* renderNode);
    ...
    const uint32_t width;
    const uint32_t height;
    const Rect repaintRect;
    const ClipRect repaintClip;
    OffscreenBuffer* offscreenBuffer;
    const BeginLayerOp* beginLayerOp;
    const RenderNode* renderNode;
    std::vector<BakedOpState*> activeUnclippedSaveLayers;

private:
    ...
    std::vector<BatchBase*> mBatches;
    std::unordered_map<mergeid_t, MergingOpBatch*> mMergingBatchLookup[OpBatchType::Count];
    OpBatch* mBatchLookup[OpBatchType::Count] = {nullptr};
    std::vector<Rect> mClearRects;
};

LayerBuilder contains the core properties of width and height, renderNode, and OffscreenBuffer.

Several important new objects appear here, which I call draw operation batches:

  • 1.BatchBase
  • 2.OpBatch (looked up via mBatchLookup)
  • 3.MergingOpBatch

In fact, these objects all hold operations converted from RecordedOp. But instead of operating on the RecordedOp directly, they operate on an object called BakedOpState.

BakedOpState
class BakedOpState {
public:
    static BakedOpState* tryConstruct(LinearAllocator& allocator, Snapshot& snapshot,
                                      const RecordedOp& recordedOp);
    static BakedOpState* tryConstructUnbounded(LinearAllocator& allocator, Snapshot& snapshot,
                                               const RecordedOp& recordedOp);

    enum class StrokeBehavior {
        StyleDefined,
    };

    static BakedOpState* tryStrokeableOpConstruct(LinearAllocator& allocator, Snapshot& snapshot,
                                                  const RecordedOp& recordedOp,
                                                  StrokeBehavior strokeBehavior,
                                                  bool expandForPathTexture);
    static BakedOpState* tryShadowOpConstruct(LinearAllocator& allocator, Snapshot& snapshot,
                                              const ShadowOp* shadowOpPtr);
    static BakedOpState* directConstruct(LinearAllocator& allocator, const ClipRect* clip,
                                         const Rect& dstRect, const RecordedOp& recordedOp);
    ...
    const float alpha;
    const RoundRectClipState* roundRectClipState;
    const RecordedOp* op;

private:
    friend class LinearAllocator;

    BakedOpState(LinearAllocator& allocator, Snapshot& snapshot, const RecordedOp& recordedOp,
                 bool expandForStroke, bool expandForPathTexture)
            : computedState(allocator, snapshot, recordedOp, expandForStroke, expandForPathTexture)
            , alpha(snapshot.alpha)
            , roundRectClipState(snapshot.roundRectClipState)
            , op(&recordedOp) {}

    BakedOpState(LinearAllocator& allocator, Snapshot& snapshot, const RecordedOp& recordedOp)
            : computedState(allocator, snapshot, recordedOp.localMatrix, recordedOp.localClip)
            , alpha(snapshot.alpha)
            , roundRectClipState(snapshot.roundRectClipState)
            , op(&recordedOp) {}

    BakedOpState(LinearAllocator& allocator, Snapshot& snapshot, const ShadowOp* shadowOpPtr)
            : computedState(allocator, snapshot)
            , alpha(snapshot.alpha)
            , roundRectClipState(snapshot.roundRectClipState)
            , op(shadowOpPtr) {}

    BakedOpState(const ClipRect* clipRect, const Rect& dstRect, const RecordedOp& recordedOp)
            : computedState(clipRect, dstRect)
            , alpha(1.0f)
            , roundRectClipState(nullptr)
            , op(&recordedOp) {}
};

As you can see from the source, BakedOpState wraps a RecordedOp, i.e. the operation generated when a View calls a drawing method on the Canvas, together with its resolved alpha and clip state.

So what is the difference between BakedOpState and RecordedOp? A RecordedOp only records the drawing command and the parameters captured at record time; a BakedOpState binds that RecordedOp to the snapshot state computed at defer time (alpha, clip, transform), ready to be replayed.

BatchBase
class BatchBase {
public:
    BatchBase(batchid_t batchId, BakedOpState* op, bool merging)
            : mBatchId(batchId), mMerging(merging) {
        mBounds = op->computedState.clippedBounds;
        mOps.push_back(op);
    }

    bool intersects(const Rect& rect) const {
        if (!rect.intersects(mBounds)) return false;

        for (const BakedOpState* op : mOps) {
            if (rect.intersects(op->computedState.clippedBounds)) {
                return true;
            }
        }
        return false;
    }

    batchid_t getBatchId() const { return mBatchId; }
    bool isMerging() const { return mMerging; }
    const std::vector<BakedOpState*>& getOps() const { return mOps; }
    ...
protected:
    batchid_t mBatchId;
    Rect mBounds;
    std::vector<BakedOpState*> mOps;
    bool mMerging;
};

You can see that BatchBase holds a set of BakedOpState together with the union of their rendered bounds. However, BatchBase is rarely used directly; its derived classes OpBatch and MergingOpBatch are generally used instead.

OpBatch
class OpBatch : public BatchBase {
public:
    OpBatch(batchid_t batchId, BakedOpState* op) : BatchBase(batchId, op, false) {}

    void batchOp(BakedOpState* op) {
        mBounds.unionWith(op->computedState.clippedBounds);
        mOps.push_back(op);
    }
};

Quite simply, compared to BatchBase, OpBatch's batchOp unions the clipped bounds of each added draw operation into mBounds and saves each BakedOpState into mOps.
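The union-of-bounds behavior can be sketched with a stripped-down batch (all names here are ours): once two ops are batched, mBounds covers both, including the empty gap between them, which is exactly why BatchBase::intersects falls back to per-op checks after the coarse bounds test.

```cpp
#include <cassert>
#include <vector>

struct Rect {
    int l, t, r, b;
    void unionWith(const Rect& o) {
        l = o.l < l ? o.l : l;  t = o.t < t ? o.t : t;
        r = o.r > r ? o.r : r;  b = o.b > b ? o.b : b;
    }
    bool intersects(const Rect& o) const {
        return l < o.r && o.l < r && t < o.b && o.t < b;
    }
};

struct Op { Rect bounds; };

// Minimal OpBatch-style batching: each added op grows the batch bounds.
struct OpBatch {
    Rect bounds;
    std::vector<const Op*> ops;
    explicit OpBatch(const Op* op) : bounds(op->bounds) { ops.push_back(op); }
    void batchOp(const Op* op) {
        bounds.unionWith(op->bounds);
        ops.push_back(op);
    }
};
```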

MergingOpBatch
class MergingOpBatch : public BatchBase {
public:
    MergingOpBatch(batchid_t batchId, BakedOpState* op)
            : BatchBase(batchId, op, true), mClipSideFlags(op->computedState.clipSideFlags) {}

    static inline bool checkSide(const int currentFlags, const int newFlags, const int side,
                                 float boundsDelta) {
        bool currentClipExists = currentFlags & side;
        bool newClipExists = newFlags & side;

        if (boundsDelta > 0 && currentClipExists) return false;
        if (boundsDelta < 0 && newClipExists) return false;
        return true;
    }

    static bool paintIsDefault(const SkPaint& paint) {
        return paint.getAlpha() == 255 && paint.getColorFilter() == nullptr &&
               paint.getShader() == nullptr;
    }

    static bool paintsAreEquivalent(const SkPaint& a, const SkPaint& b) {
        return a.getAlpha() == b.getAlpha() && a.getColorFilter() == b.getColorFilter() &&
               a.getShader() == b.getShader();
    }

    bool canMergeWith(BakedOpState* op) const {
        bool isTextBatch =
                getBatchId() == OpBatchType::Text || getBatchId() == OpBatchType::ColorText;
        if (!isTextBatch || PaintUtils::hasTextShadow(op->op->paint)) {
            if (intersects(op->computedState.clippedBounds)) return false;
        }

        const BakedOpState* lhs = op;
        const BakedOpState* rhs = mOps[0];

        if (!MathUtils::areEqual(lhs->alpha, rhs->alpha)) return false;
        if (lhs->roundRectClipState != rhs->roundRectClipState) return false;
        if (lhs->computedState.localProjectionPathMask ||
            rhs->computedState.localProjectionPathMask) return false;

        const int currentFlags = mClipSideFlags;
        const int newFlags = op->computedState.clipSideFlags;
        if (currentFlags != OpClipSideFlags::None || newFlags != OpClipSideFlags::None) {
            const Rect& opBounds = op->computedState.clippedBounds;
            float boundsDelta = mBounds.left - opBounds.left;
            if (!checkSide(currentFlags, newFlags, OpClipSideFlags::Left, boundsDelta)) return false;
            boundsDelta = mBounds.top - opBounds.top;
            if (!checkSide(currentFlags, newFlags, OpClipSideFlags::Top, boundsDelta)) return false;
            boundsDelta = opBounds.right - mBounds.right;
            if (!checkSide(currentFlags, newFlags, OpClipSideFlags::Right, boundsDelta)) return false;
            boundsDelta = opBounds.bottom - mBounds.bottom;
            if (!checkSide(currentFlags, newFlags, OpClipSideFlags::Bottom, boundsDelta)) return false;
        }

        const SkPaint* newPaint = op->op->paint;
        const SkPaint* oldPaint = mOps[0]->op->paint;
        if (newPaint == oldPaint) {
            return true;
        } else if (newPaint && !oldPaint) {
            return paintIsDefault(*newPaint);
        } else if (!newPaint && oldPaint) {
            return paintIsDefault(*oldPaint);
        }
        return paintsAreEquivalent(*newPaint, *oldPaint);
    }

    void mergeOp(BakedOpState* op) {
        mBounds.unionWith(op->computedState.clippedBounds);
        mOps.push_back(op);
        mClipSideFlags |= op->computedState.clipSideFlags;
    }

    int getClipSideFlags() const { return mClipSideFlags; }
    const Rect& getClipRect() const { return mBounds; }

private:
    int mClipSideFlags;
};

MergingOpBatch, as the name implies, attempts to combine multiple BakedOpStates into a single operation batch. Operations that merge into one batch can be issued to the GPU together.

The following points should be considered to determine whether a merger can be carried out:

  • 1. If the batch is not a text batch, or it is text but drawn with a shadow, the new op's bounds must not intersect the batch; if they do, return false.
  • 2. If the alphas differ, return false.
  • 3. If the round-rect clip states differ, return false.
  • 4. If either clip-side flag set is non-empty, checkSide verifies that each clipped edge (left, top, right, bottom) is compatible with the batch's bounds; if not, return false.
  • 5. Finally, the SkPaints must be equivalent (same alpha, color filter and shader); otherwise return false.
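The two most important criteria, matching alpha and non-overlapping bounds, can be condensed into a sketch (this is our simplification, not the full AOSP check): overlapping draws must keep their painter's-algorithm ordering, so they can never be reordered into one batch.

```cpp
#include <cassert>

struct Rect {
    float l, t, r, b;
    bool intersects(const Rect& o) const {
        return l < o.r && o.l < r && t < o.b && o.t < b;
    }
};

struct BakedOp {
    float alpha;
    Rect clippedBounds;
};

// Condensed merge test: same alpha, and no overlap with the batch head.
bool canMerge(const BakedOp& batchHead, const BakedOp& candidate) {
    if (batchHead.alpha != candidate.alpha) return false;
    if (batchHead.clippedBounds.intersects(candidate.clippedBounds)) {
        return false;  // overlapping draws must preserve ordering
    }
    return true;
}
```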

With that in mind, let’s go back and look at the draw method in OpenGLPipeline.

OpenGLPipeline draw

In this case, the render pipe refers to OpenGL’s render pipe. So let’s look at the draw method of OpenGLPipeline.

bool OpenGLPipeline::draw(const Frame& frame, const SkRect& screenDirty, const SkRect& dirty,
                          const FrameBuilder::LightGeometry& lightGeometry,
                          LayerUpdateQueue* layerUpdateQueue, const Rect& contentDrawBounds,
                          bool opaque, bool wideColorGamut,
                          const BakedOpRenderer::LightInfo& lightInfo,
                          const std::vector<sp<RenderNode>>& renderNodes,
                          FrameInfoVisualizer* profiler) {
    mEglManager.damageFrame(frame, dirty);

    bool drew = false;

    auto& caches = Caches::getInstance();
    FrameBuilder frameBuilder(dirty, frame.width(), frame.height(), lightGeometry, caches);

    frameBuilder.deferLayers(*layerUpdateQueue);
    layerUpdateQueue->clear();

    frameBuilder.deferRenderNodeScene(renderNodes, contentDrawBounds);

    BakedOpRenderer renderer(caches, mRenderThread.renderState(), opaque, wideColorGamut, lightInfo);
    frameBuilder.replayBakedOps<BakedOpDispatcher>(renderer);
    ProfileRenderer profileRenderer(renderer);
    profiler->draw(profileRenderer);

    drew = renderer.didDraw();

    // post frame cleanup
    caches.clearGarbage();
    caches.pathCache.trim();
    caches.tessellationCache.trim();
    ...
    return drew;
}
  • 1. After building the FrameBuilder object, call the deferLayers method

  • 2. Create a BakedOpRenderer object and call replayBakedOps on the FrameBuilder to distribute the action via BakedOpDispatcher

  • 3. Call ProfileRenderer’s draw method

FrameBuilder deferLayers

void FrameBuilder::deferLayers(const LayerUpdateQueue& layers) {
    for (int i = layers.entries().size() - 1; i >= 0; i--) {
        RenderNode* layerNode = layers.entries()[i].renderNode.get();
        OffscreenBuffer* layer = layerNode->getLayer();
        if (CC_LIKELY(layer)) {
            Rect layerDamage = layers.entries()[i].damage;
            layerDamage.doIntersect(0, 0, layer->viewportWidth, layer->viewportHeight);
            layerNode->computeOrdering();
            ...
            saveForLayer(layerNode->getWidth(), layerNode->getHeight(), 0, 0, layerDamage,
                         lightCenter, nullptr, layerNode);

            if (layerNode->getDisplayList()) {
                deferNodeOps(*layerNode);
            }
            restoreForLayer();
        }
    }
}

Note that the LayerUpdateQueue records the RenderNodes that need to be updated; the damage area of each RenderNode was already recorded during the prepareTree step. The entries method of LayerUpdateQueue simply returns the mEntries list.

  • 1. Iterate over the RenderNodes recorded in mEntries, first intersecting the layer's damage with the layer viewport so nothing is drawn outside the layer.
  • 2. Calculate the projection ordering of the RenderNode via computeOrdering.
  • 3. saveForLayer generates a LayerBuilder from the RenderNode and BeginLayerOp and records it in the FrameBuilder's mLayerBuilders collection; at the same time, mLayerStack pushes the index of the new layer.
  • 4. Determine whether the RenderNode has a DisplayList. We know from the previous article that the DisplayList holds all the child RenderNodes of the current RenderNode, so if it exists the children are processed by calling deferNodeOps.
  • 5. restoreForLayer pops the layer index off mLayerStack.

FrameBuilder saveForLayer

void FrameBuilder::saveForLayer(uint32_t layerWidth, uint32_t layerHeight, float contentTranslateX,
                                float contentTranslateY, const Rect& repaintRect,
                                const Vector3& lightCenter, const BeginLayerOp* beginLayerOp,
                                RenderNode* renderNode) {
    mCanvasState.save(SaveFlags::MatrixClip);
    mCanvasState.writableSnapshot()->initializeViewport(layerWidth, layerHeight);
    mCanvasState.writableSnapshot()->roundRectClipState = nullptr;
    mCanvasState.writableSnapshot()->setRelativeLightCenter(lightCenter);
    mCanvasState.writableSnapshot()->transform->loadTranslate(contentTranslateX, contentTranslateY,
                                                              0);
    mCanvasState.writableSnapshot()->setClip(repaintRect.left, repaintRect.top, repaintRect.right,
                                             repaintRect.bottom);

    mLayerStack.push_back(mLayerBuilders.size());
    auto newFbo = mAllocator.create<LayerBuilder>(layerWidth, layerHeight, repaintRect,
                                                  beginLayerOp, renderNode);
    mLayerBuilders.push_back(newFbo);
}

You can see that each RenderNode generates a LayerBuilder object that is stored in mLayerBuilders, while mCanvasState saves the current canvas state (matrix and clip). mLayerStack records the index of the new builder within mLayerBuilders, so the stack always tells you which layer in the View hierarchy is currently being built.

FrameBuilder deferNodeOps
#define OP_RECEIVER(Type)                                       \
    [](FrameBuilder& frameBuilder, const RecordedOp& op) {      \
        frameBuilder.defer##Type(static_cast<const Type&>(op)); \
    },

void FrameBuilder::deferNodeOps(const RenderNode& renderNode) {
    typedef void (*OpDispatcher)(FrameBuilder& frameBuilder, const RecordedOp& op);
    static OpDispatcher receivers[] = BUILD_DEFERRABLE_OP_LUT(OP_RECEIVER);

    const DisplayList& displayList = *(renderNode.getDisplayList());
    for (auto& chunk : displayList.getChunks()) {
        FatVector<ZRenderNodeOpPair, 16> zTranslatedNodes;
        buildZSortedChildList(&zTranslatedNodes, displayList, chunk);

        defer3dChildren(chunk.reorderClip, ChildrenSelectMode::Negative, zTranslatedNodes);

        for (size_t opIndex = chunk.beginOpIndex; opIndex < chunk.endOpIndex; opIndex++) {
            const RecordedOp* op = displayList.getOps()[opIndex];
            receivers[op->opId](*this, *op);

            if (CC_UNLIKELY(!renderNode.mProjectedNodes.empty() &&
                            displayList.projectionReceiveIndex >= 0 &&
                            static_cast<int>(opIndex) == displayList.projectionReceiveIndex)) {
                deferProjectedChildren(renderNode);
            }
        }

        defer3dChildren(chunk.reorderClip, ChildrenSelectMode::Positive, zTranslatedNodes);
    }
}

In this method, the chunks of the DisplayList are iterated over; each chunk records where a run of ops of the current RenderNode begins and ends. There are three steps:

  • 1. buildZSortedChildList reorders the child RenderNodes on the Z axis based on the chunk, then defer3dChildren is called with mode ChildrenSelectMode::Negative to process the z-sorted zTranslatedNodes list.
  • 2. Using the op start and end positions recorded in the chunk, each RecordedOp in the current DisplayList is fetched by index, and the function pointer at the corresponding position of the OpDispatcher array is invoked. This step converts the RecordedOp into a BakedOpState and saves it.
  • 3. defer3dChildren is called again, this time with mode ChildrenSelectMode::Positive.

Let's look at buildZSortedChildList, defer3dChildren, and how the receivers array converts a RecordedOp into a BakedOpState.

FrameBuilder buildZSortedChildList
template <typename V>
static void buildZSortedChildList(V* zTranslatedNodes, const DisplayList& displayList,
                                  const DisplayList::Chunk& chunk) {
    if (chunk.beginChildIndex == chunk.endChildIndex) return;

    for (size_t i = chunk.beginChildIndex; i < chunk.endChildIndex; i++) {
        RenderNodeOp* childOp = displayList.getChildren()[i];
        RenderNode* child = childOp->renderNode;
        float childZ = child->properties().getZ();

        if (!MathUtils::isZero(childZ) && chunk.reorderChildren) {
            zTranslatedNodes->push_back(ZRenderNodeOpPair(childZ, childOp));
            childOp->skipInOrderDraw = true;
        } else if (!child->properties().getProjectBackwards()) {
            childOp->skipInOrderDraw = false;
        }
    }
    std::stable_sort(zTranslatedNodes->begin(), zTranslatedNodes->end());
}

It is very simple: the method takes the Z coordinate of each child RenderNode, collects the non-zero-Z children into zTranslatedNodes, and orders them with a stable sort.
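The filtering and stable sort can be sketched like this (types are ours): children with Z == 0 stay in document order and are skipped, and because the sort is stable, children with equal Z also keep their original relative order.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Child {
    float z;
    int id;  // original document-order position, for illustration
};

// Sketch of buildZSortedChildList: pull out non-zero-Z children and
// stable-sort them by Z so equal-Z children keep document order.
std::vector<Child> zSorted(const std::vector<Child>& children) {
    std::vector<Child> out;
    for (const Child& c : children) {
        if (c.z != 0.0f) out.push_back(c);  // Z == 0 is drawn in order
    }
    std::stable_sort(out.begin(), out.end(),
                     [](const Child& a, const Child& b) { return a.z < b.z; });
    return out;
}
```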

FrameBuilder::defer3dChildren
template <typename V>
static size_t findNonNegativeIndex(const V& zTranslatedNodes) {
    for (size_t i = 0; i < zTranslatedNodes.size(); i++) {
        if (zTranslatedNodes[i].key >= 0.0f) return i;
    }
    return zTranslatedNodes.size();
}

template <typename V>
void FrameBuilder::defer3dChildren(const ClipBase* reorderClip, ChildrenSelectMode mode,
                                   const V& zTranslatedNodes) {
    const int size = zTranslatedNodes.size();
    if (size == 0 ||
        (mode == ChildrenSelectMode::Negative && zTranslatedNodes[0].key > 0.0f) ||
        (mode == ChildrenSelectMode::Positive && zTranslatedNodes[size - 1].key < 0.0f)) {
        return;
    }

    const size_t nonNegativeIndex = findNonNegativeIndex(zTranslatedNodes);
    size_t drawIndex, shadowIndex, endIndex;
    if (mode == ChildrenSelectMode::Negative) {
        drawIndex = 0;
        endIndex = nonNegativeIndex;
        shadowIndex = endIndex;  // draw no shadows
    } else {
        drawIndex = nonNegativeIndex;
        endIndex = size;
        shadowIndex = drawIndex;  // potentially draw shadow for each pos Z child
    }

    float lastCasterZ = 0.0f;
    while (shadowIndex < endIndex || drawIndex < endIndex) {
        if (shadowIndex < endIndex) {
            const RenderNodeOp* casterNodeOp = zTranslatedNodes[shadowIndex].value;
            const float casterZ = zTranslatedNodes[shadowIndex].key;
            if (shadowIndex == drawIndex || casterZ - lastCasterZ < 0.1f) {
                deferShadow(reorderClip, *casterNodeOp);
                lastCasterZ = casterZ;  // must do this even if current caster not casting a shadow
                shadowIndex++;
                continue;
            }
        }

        const RenderNodeOp* childOp = zTranslatedNodes[drawIndex].value;
        deferRenderNodeOpImpl(*childOp);
        drawIndex++;
    }
}

findNonNegativeIndex finds the index of the first child RenderNode with a non-negative Z; if every child is below the Z plane it returns the list size. Depending on ChildrenSelectMode, the method then takes one of two paths:

  • 1. For ChildrenSelectMode::Negative, drawIndex is 0, endIndex is the index of the first non-negative-Z RenderNode, and shadowIndex equals endIndex, so the loop never enters deferShadow: negative-Z children cast no shadows.
  • 2. For ChildrenSelectMode::Positive it is the opposite: a shadow may be drawn for each positive-Z child.

Each child RenderNode is then processed by calling deferRenderNodeOpImpl. Note that in Negative mode the walk starts from index 0, while in Positive mode it starts from the first RenderNode with a non-negative Z.

deferRenderNodeOpImpl
void FrameBuilder::deferRenderNodeOpImpl(const RenderNodeOp& op) {
    if (op.renderNode->nothingToDraw()) return;

    int count = mCanvasState.save(SaveFlags::MatrixClip);
    mCanvasState.writableSnapshot()->applyClip(op.localClip,
                                               *mCanvasState.currentSnapshot()->transform);
    mCanvasState.concatMatrix(op.localMatrix);

    deferNodePropsAndOps(*op.renderNode);

    mCanvasState.restoreToCount(count);
}

void FrameBuilder::deferNodePropsAndOps(RenderNode& node) {
    ....
    bool quickRejected = mCanvasState.currentSnapshot()->getRenderTargetClip().isEmpty() ||
                         (properties.getClipToBounds() &&
                          mCanvasState.quickRejectConservative(0, 0, width, height));
    if (!quickRejected) {
        if (node.getLayer()) {
            // HW layer
            LayerOp* drawLayerOp = mAllocator.create_trivial<LayerOp>(node);
            BakedOpState* bakedOpState = tryBakeOpState(*drawLayerOp);
            if (bakedOpState) {
                currentLayer().deferUnmergeableOp(mAllocator, bakedOpState, OpBatchType::Bitmap);
            }
        } else if (CC_UNLIKELY(!saveLayerBounds.isEmpty())) {
            SkPaint saveLayerPaint;
            saveLayerPaint.setAlpha(properties.getAlpha());
            deferBeginLayerOp(*mAllocator.create_trivial<BeginLayerOp>(
                    saveLayerBounds, Matrix4::identity(),
                    nullptr,  // no record-time clip - need only respect defer-time one
                    &saveLayerPaint));
            deferNodeOps(node);
            deferEndLayerOp(*mAllocator.create_trivial<EndLayerOp>());
        } else {
            deferNodeOps(node);
        }
    }
}

In fact, there are two cases for the child RenderNode:

  • 1. The RenderNode belongs to a View with content that overrides onDraw (e.g. ImageView, TextView) and has a hardware layer set, so the OffscreenBuffer returned by getLayer is not null. In that case a LayerOp is baked into a BakedOpState and deferred as an unmergeable op into the current layer.
  • 2. The RenderNode belongs to a ViewGroup, so there is no OffscreenBuffer, and there are two sub-paths: if saveLayerBounds is not empty, a BeginLayerOp/EndLayerOp pair is deferred around a recursive deferNodeOps call that processes the grandchild RenderNodes; otherwise deferNodeOps is simply called directly.
How OpDispatcher Receivers perform the RecordedOp

If this is confusing, look at how the array is defined:

    static OpDispatcher receivers[] = BUILD_DEFERRABLE_OP_LUT(OP_RECEIVER);

OP_RECEIVER is defined as follows:

#define OP_RECEIVER(Type)                                       \
    [](FrameBuilder& frameBuilder, const RecordedOp& op) {      \
        frameBuilder.defer##Type(static_cast<const Type&>(op)); \
    },

The BUILD_DEFERRABLE_OP_LUT macro expands OP_RECEIVER once per op type, so each slot of the array is a lambda that forwards to the matching frameBuilder.deferXxx method. Take ColorOp, the color operation, as an example: its slot in the array invokes frameBuilder.deferColorOp. So when deferNodeOps walks the RecordedOps indexed by the chunk, the following method is executed:

void FrameBuilder::deferColorOp(const ColorOp& op) {
    BakedOpState* bakedState = tryBakeUnboundedOpState(op);
    if (!bakedState) return;  // quick rejected

    currentLayer().deferUnmergeableOp(mAllocator, bakedState, OpBatchType::Vertices);
}
 LayerBuilder& currentLayer() { return *(mLayerBuilders[mLayerStack.back()]); }

    BakedOpState* tryBakeUnboundedOpState(const RecordedOp& recordedOp) {
        return BakedOpState::tryConstructUnbounded(mAllocator, *mCanvasState.writableSnapshot(),
                                                   recordedOp);
    }
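The lookup-table pattern itself, an array of captureless lambdas indexed by opId, can be sketched outside hwui like this (the types and ids here are invented for illustration):

```cpp
#include <cassert>

struct RecordedOp { int opId; };
struct ColorOp : RecordedOp { int color; };

struct FrameBuilderStub {
    int lastColor = -1;
    void deferColorOp(const ColorOp& op) { lastColor = op.color; }
};

using OpDispatcher = void (*)(FrameBuilderStub&, const RecordedOp&);

// Sketch of the OP_RECEIVER table: each slot downcasts the generic
// RecordedOp and forwards it to the matching defer method.
static OpDispatcher receivers[] = {
    // slot 0 plays the role of ColorOp in this sketch
    [](FrameBuilderStub& fb, const RecordedOp& op) {
        fb.deferColorOp(static_cast<const ColorOp&>(op));
    },
};
```

Because the lambdas capture nothing, they convert to plain function pointers, which is what makes the static array possible.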

The LinearAllocator first allocates a BakedOpState object that holds the current RecordedOp together with the snapshot state (clip, transform, alpha).

Then the deferUnmergeableOp method of the current LayerBuilder, the one indexed by the top of mLayerStack, is called.

void LayerBuilder::deferUnmergeableOp(LinearAllocator& allocator, BakedOpState* op,
                                      batchid_t batchId) {
    onDeferOp(allocator, op);
    OpBatch* targetBatch = mBatchLookup[batchId];

    size_t insertBatchIndex = mBatches.size();
    if (targetBatch) {
        locateInsertIndex(batchId, op->computedState.clippedBounds, (BatchBase**)(&targetBatch),
                          &insertBatchIndex);
    }

    if (targetBatch) {
        targetBatch->batchOp(op);
    } else {
        targetBatch = allocator.create<OpBatch>(batchId, op);
        mBatchLookup[batchId] = targetBatch;
        mBatches.insert(mBatches.begin() + insertBatchIndex, targetBatch);
    }
}

Initially mBatchLookup holds a null pointer for every batch id, so the else branch is taken: an OpBatch object is created, registered under its batch id in mBatchLookup, and inserted into mBatches. If an open OpBatch for that batch id already exists (and is still a valid insertion target), its batchOp method is called to add the new BakedOpState to it.
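The lookup-or-create pattern can be sketched as follows (a simplified model with our own names, omitting locateInsertIndex): one open batch per batch id, a hit appends, a miss creates.

```cpp
#include <cassert>
#include <vector>

struct Op { int data; };
struct OpBatch {
    int batchId;
    std::vector<Op> ops;
};

constexpr int kBatchTypeCount = 4;

// Sketch of deferUnmergeableOp's batch lookup.
struct LayerBuilderSketch {
    OpBatch* batchLookup[kBatchTypeCount] = {nullptr};
    std::vector<OpBatch*> batches;

    void deferUnmergeableOp(Op op, int batchId) {
        OpBatch* target = batchLookup[batchId];
        if (target) {
            target->ops.push_back(op);        // existing open batch: append
        } else {
            target = new OpBatch{batchId, {op}};  // miss: create and register
            batchLookup[batchId] = target;
            batches.push_back(target);
        }
    }
    ~LayerBuilderSketch() {
        for (OpBatch* b : batches) delete b;
    }
};
```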

Of course, let’s take a look at how many OpBatch types there are:

namespace OpBatchType {
enum {
    Bitmap,
    MergedPatch,
    AlphaVertices,
    Vertices,
    AlphaMaskTexture,
    Text,
    ColorText,
    Shadow,
    TextureLayer,
    Functor,
    CopyToLayer,
    CopyFromLayer,

    Count  // must be last
};
}

The FrameBuilder replayBakedOps

    template <typename StaticDispatcher, typename Renderer>
    void replayBakedOps(Renderer& renderer) {
        std::vector<OffscreenBuffer*> temporaryLayers;
        finishDefer();

#define X(Type)                                                                   \
    [](void* renderer, const BakedOpState& state) {                               \
        StaticDispatcher::on##Type(*(static_cast<Renderer*>(renderer)),           \
                                   static_cast<const Type&>(*(state.op)), state); \
    },
        static BakedOpReceiver unmergedReceivers[] = BUILD_RENDERABLE_OP_LUT(X);
#undef X


#define X(Type)                                                                           \
    [](void* renderer, const MergedBakedOpList& opList) {                                 \
        StaticDispatcher::onMerged##Type##s(*(static_cast<Renderer*>(renderer)), opList); \
    },
        static MergedOpReceiver mergedReceivers[] = BUILD_MERGEABLE_OP_LUT(X);
#undef X

        for (int i = mLayerBuilders.size() - 1; i >= 1; i--) {
            GL_CHECKPOINT(MODERATE);
            LayerBuilder& layer = *(mLayerBuilders[i]);
            if (layer.renderNode) {
                renderer.startRepaintLayer(layer.offscreenBuffer, layer.repaintRect);
                layer.replayBakedOpsImpl((void*)&renderer, unmergedReceivers, mergedReceivers);
                renderer.endLayer();
            } else if (!layer.empty()) {
...
            }
        }

        if (CC_LIKELY(mDrawFbo0)) {
            const LayerBuilder& fbo0 = *(mLayerBuilders[0]);
            renderer.startFrame(fbo0.width, fbo0.height, fbo0.repaintRect);
            fbo0.replayBakedOpsImpl((void*)&renderer, unmergedReceivers, mergedReceivers);
            renderer.endFrame(fbo0.repaintRect);
        }

        for (auto& temporaryLayer : temporaryLayers) {
            renderer.recycleTemporaryLayer(temporaryLayer);
        }
    }

First, all the saved LayerBuilders in mLayerBuilders except index 0 are walked in reverse, and for each layer with a renderNode the following methods run:

  • 1. The renderer's startRepaintLayer method
  • 2. The LayerBuilder's replayBakedOpsImpl
  • 3. The renderer's endLayer method

Then, if mDrawFbo0 is true, the LayerBuilder at position 0 goes through the analogous startFrame / replayBakedOpsImpl / endFrame sequence. The Renderer here is the BakedOpRenderer.

Note that the replayBakedOpsImpl method sets two special objects:

  • 1.BakedOpReceiver
  • 2.MergedOpReceiver

These two parameters are passed to replayBakedOpsImpl as arrays of function pointers; each array is generated by a macro.

#define X(Type)                                                                   \
    [](void* renderer, const BakedOpState& state) {                               \
        StaticDispatcher::on##Type(*(static_cast<Renderer*>(renderer)),           \
                                   static_cast<const Type&>(*(state.op)), state); \
    },
        static BakedOpReceiver unmergedReceivers[] = BUILD_RENDERABLE_OP_LUT(X);
#undef X

Note that StaticDispatcher is actually a template parameter; it refers to the type supplied at the call site:

frameBuilder.replayBakedOps<BakedOpDispatcher>(renderer);

BakedOpDispatcher.

If the type passed to X is BitmapOp, then the entry actually saved is this method:

BakedOpDispatcher::onBitmapOp

Similarly, the mergedReceivers entry for BitmapOp is:

BakedOpDispatcher::onMergedBitmapOps

This yields one receiver per op type. Note that operations can only be merged when they are the same kind of drawing operation with the same transparency.

Let’s take a look at what each of these methods does in turn.

BakedOpRenderer startFrame
void BakedOpRenderer::startFrame(uint32_t width, uint32_t height, const Rect& repaintRect) {
    mRenderState.bindFramebuffer(0);
    setViewport(width, height);

    if (!mOpaque) {
        clearColorBuffer(repaintRect);
    }

}
void RenderState::bindFramebuffer(GLuint fbo) {
    if (mFramebuffer != fbo) {
        mFramebuffer = fbo;
        glBindFramebuffer(GL_FRAMEBUFFER, mFramebuffer);
    }
}

The upshot is that the framebuffer with index 0 is bound; framebuffer 0 is the default on-screen framebuffer. The method then sets the viewport width and height and, if the window is not opaque, clears the color buffer of the repaint area.

LayerBuilder replayBakedOpsImpl
void LayerBuilder::replayBakedOpsImpl(void* arg, BakedOpReceiver* unmergedReceivers,
                                      MergedOpReceiver* mergedReceivers) const {
    for (const BatchBase* batch : mBatches) {
        size_t size = batch->getOps().size();
        if (size > 1 && batch->isMerging()) {
            int opId = batch->getOps()[0]->op->opId;
            const MergingOpBatch* mergingBatch = static_cast<const MergingOpBatch*>(batch);
            MergedBakedOpList data = {batch->getOps().data(), size,
                                      mergingBatch->getClipSideFlags(),
                                      mergingBatch->getClipRect()};
            mergedReceivers[opId](arg, data);
        } else {
            for (const BakedOpState* op : batch->getOps()) {
                unmergedReceivers[op->op->opId](arg, *op);
            }
        }
    }
}

Earlier, all of the batches were saved into mBatches by the defer operations (deferUnmergeableOp and its merging counterpart). Now replayBakedOpsImpl iterates over all of those batches:

  • 1. If the batch stores more than one BakedOpState and is a merging batch, it is a MergingOpBatch. A MergedBakedOpList is built, and the mergedReceivers entry at the op's index is invoked, i.e. BakedOpDispatcher::onMerged##Type##s.
  • 2. Otherwise, for each BakedOpState in the batch, the unmergedReceivers entry at the op's index is invoked, i.e. BakedOpDispatcher::on##Type.

Let’s take an example of a Bitmap operation.

Operations on bitmaps
void BakedOpDispatcher::onBitmapOp(BakedOpRenderer& renderer, const BitmapOp& op,
                                   const BakedOpState& state) {
    Texture* texture = renderer.getTexture(op.bitmap);
    if (!texture) return;
    const AutoTexture autoCleanup(texture);

    const int textureFillFlags = (op.bitmap->colorType() == kAlpha_8_SkColorType)
                                         ? TextureFillFlags::IsAlphaMaskTexture
                                         : TextureFillFlags::None;
    Glop glop;
    GlopBuilder(renderer.renderState(), renderer.caches(), &glop)
            .setRoundRectClipState(state.roundRectClipState)
            .setMeshTexturedUnitQuad(texture->uvMapper)
            .setFillTexturePaint(*texture, textureFillFlags, op.paint, state.alpha)
            .setTransform(state.computedState.transform, TransformFlags::None)
            .setModelViewMapUnitToRectSnap(Rect(texture->width(), texture->height()))
            .build();
    renderer.renderGlop(state, glop);
}

You can see that this process takes the texture for the Bitmap and then uses GlopBuilder, together with the other parameters, to build a Glop object.

Finally, Renderer’s renderGlop is called for rendering

void BakedOpDispatcher::onMergedBitmapOps(BakedOpRenderer& renderer,
                                          const MergedBakedOpList& opList) {
    const BakedOpState& firstState = *(opList.states[0]);
    Bitmap* bitmap = (static_cast<const BitmapOp*>(opList.states[0]->op))->bitmap;

    Texture* texture = renderer.caches().textureCache.get(bitmap);
    if (!texture) return;
    const AutoTexture autoCleanup(texture);

    TextureVertex vertices[opList.count * 4];
    for (size_t i = 0; i < opList.count; i++) {
        const BakedOpState& state = *(opList.states[i]);
        TextureVertex* rectVerts = &vertices[i * 4];

        Rect opBounds = state.op->unmappedBounds;
        state.computedState.transform.mapRect(opBounds);
        if (CC_LIKELY(state.computedState.transform.isPureTranslate())) {
            opBounds.snapToPixelBoundaries();
        }
        storeTexturedRect(rectVerts, opBounds);
        renderer.dirtyRenderTarget(opBounds);
    }

    const int textureFillFlags = (bitmap->colorType() == kAlpha_8_SkColorType)
                                         ? TextureFillFlags::IsAlphaMaskTexture
                                         : TextureFillFlags::None;
    Glop glop;
    GlopBuilder(renderer.renderState(), renderer.caches(), &glop)
            .setRoundRectClipState(firstState.roundRectClipState)
            .setMeshTexturedIndexedQuads(vertices, opList.count * 6)
            .setFillTexturePaint(*texture, textureFillFlags, firstState.op->paint,
                                 firstState.alpha)
            .setTransform(Matrix4::identity(), TransformFlags::None)
            .setModelViewIdentityEmptyBounds()
            .build();
    ClipRect renderTargetClip(opList.clip);
    const ClipBase* clip = opList.clipSideFlags ? &renderTargetClip : nullptr;
    renderer.renderGlop(nullptr, clip, glop);
}

For the merged operation, the area of each individual draw may be different, since multiple operations are being merged together. Each op's unmapped bounds are therefore transformed, snapped to pixel boundaries for pure translations, stored as a textured quad, and recorded as a dirty area on the render target via dirtyRenderTarget.

Finally, renderGlop is called. The difference is that a clip rect built from the op list is also passed to the BakedOpRenderer.

BakedOpRenderer renderGlop
    void renderGlop(const BakedOpState& state, const Glop& glop) {
        renderGlop(&state.computedState.clippedBounds, state.computedState.getClipIfNeeded(), glop);
    }

    void renderGlop(const Rect* dirtyBounds, const ClipBase* clip, const Glop& glop) {
        mGlopReceiver(*this, dirtyBounds, clip, glop);
    }

You can see that the function pointer mGlopReceiver is called at the end. What is this mGlopReceiver actually? Let’s look at the constructor for BakedOpRenderer:

    BakedOpRenderer(Caches& caches, RenderState& renderState, bool opaque, bool wideColorGamut,
                    const LightInfo& lightInfo)
            : mGlopReceiver(DefaultGlopReceiver)
            , mRenderState(renderState)
            , mCaches(caches)
            , mOpaque(opaque)
            , mWideColorGamut(wideColorGamut)
            , mLightInfo(lightInfo) {}

    static void DefaultGlopReceiver(BakedOpRenderer& renderer, const Rect* dirtyBounds,
                                    const ClipBase* clip, const Glop& glop) {
        renderer.renderGlopImpl(dirtyBounds, clip, glop);
    }

It actually calls the renderGlopImpl method of BakedOpRenderer.

void BakedOpRenderer::renderGlopImpl(const Rect* dirtyBounds, const ClipBase* clip,
                                     const Glop& glop) {
    prepareRender(dirtyBounds, clip);
    bool overrideDisableBlending = !mHasDrawn && mOpaque && !mRenderTarget.frameBufferId &&
                                   glop.blend.src == GL_ONE &&
                                   glop.blend.dst == GL_ONE_MINUS_SRC_ALPHA;
    mRenderState.render(glop, mRenderTarget.orthoMatrix, overrideDisableBlending);
    if (!mRenderTarget.frameBufferId) mHasDrawn = true;
}

It does two things:

  • 1. prepareRender enables the clipping (scissor) area and the stencil test as needed, and attaches a stencil renderbuffer to the framebuffer
  • 2. Calls the render method of RenderState
prepareRender
void BakedOpRenderer::prepareRender(const Rect* dirtyBounds, const ClipBase* clip) {
    mRenderState.scissor().setEnabled(clip != nullptr);
    if (clip) {
        mRenderState.scissor().set(mRenderTarget.viewportHeight, clip->rect);
    }

    if (CC_LIKELY(!Properties::debugOverdraw)) {
        if (CC_UNLIKELY(clip && clip->mode != ClipMode::Rectangle)) {
            if (mRenderTarget.lastStencilClip != clip) {
                mRenderTarget.lastStencilClip = clip;
                if (mRenderTarget.frameBufferId != 0 && !mRenderTarget.stencil) {
                    OffscreenBuffer* layer = mRenderTarget.offscreenBuffer;
                    mRenderTarget.stencil = mCaches.renderBufferCache.get(
                            Stencil::getLayerStencilFormat(), layer->texture.width(),
                            layer->texture.height());
                    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
                                              GL_RENDERBUFFER,
                                              mRenderTarget.stencil->getName());
                }
                if (clip->mode == ClipMode::RectangleList) {
                    setupStencilRectList(clip);
                } else {
                    setupStencilRegion(clip);
                }
            } else {
                int incrementThreshold = 0;
                if (CC_LIKELY(clip->mode == ClipMode::RectangleList)) {
                    auto&& rectList = reinterpret_cast<const ClipRectList*>(clip)->rectList;
                    incrementThreshold = rectList.getTransformedRectanglesCount();
                }
                mRenderState.stencil().enableTest(incrementThreshold);
            }
        } else {
            mRenderState.stencil().disable();
        }
    }

    if (dirtyBounds) {
        dirtyRenderTarget(*dirtyBounds);
    }
}

You can see that the logic here is mostly about the OpenGL stencil test, checking whether it needs to be enabled, and turning on scissoring when there is a clip area. The most important piece is fetching a renderbuffer from the cache, matching the layer's width and height, and attaching it to the framebuffer as the stencil attachment via glFramebufferRenderbuffer.

Render method of RenderState

The following code is very long, so only the core is excerpted here; it is all routine OpenGL ES work discussed in earlier articles.

void RenderState::render(const Glop& glop, const Matrix4& orthoMatrix,
                         bool overrideDisableBlending) {
    const Glop::Mesh& mesh = glop.mesh;
    const Glop::Mesh::Vertices& vertices = mesh.vertices;
    const Glop::Mesh::Indices& indices = mesh.indices;
    const Glop::Fill& fill = glop.fill;

    GL_CHECKPOINT(MODERATE);

    // ---------------------------------------------
    // ---------- Program + uniform setup ----------
    // ---------------------------------------------
    mCaches->setProgram(fill.program);

    if (fill.colorEnabled) {
        fill.program->setColor(fill.color);
    }

    fill.program->set(orthoMatrix, glop.transform.modelView, glop.transform.meshTransform(),
                      glop.transform.transformFlags & TransformFlags::OffsetByFudgeFactor);
    ...
    // --------------------------------
    // ---------- Mesh setup ----------
    // --------------------------------
    // vertices
    meshState().bindMeshBuffer(vertices.bufferObject);
    meshState().bindPositionVertexPointer(vertices.position, vertices.stride);

    // indices
    meshState().bindIndicesBuffer(indices.bufferObject);
    ...
    // ------------------------------------
    // ---------- GL state setup ----------
    // ------------------------------------
    ...
    // ------------------------------------
    // ---------- Actual drawing ----------
    // ------------------------------------
    if (indices.bufferObject == meshState().getQuadListIBO()) {
        GLsizei elementsCount = mesh.elementCount;
        const GLbyte* vertexData = static_cast<const GLbyte*>(vertices.position);
        while (elementsCount > 0) {
            GLsizei drawCount = std::min(elementsCount, (GLsizei)kMaxNumberOfQuads * 6);
            GLsizei vertexCount = (drawCount / 6) * 4;
            meshState().bindPositionVertexPointer(vertexData, vertices.stride);
            if (vertices.attribFlags & VertexAttribFlags::TextureCoord) {
                meshState().bindTexCoordsVertexPointer(vertexData + kMeshTextureOffset,
                                                       vertices.stride);
            }

            if (mCaches->extensions().getMajorGlVersion() >= 3) {
                glDrawRangeElements(mesh.primitiveMode, 0, vertexCount - 1, drawCount,
                                    GL_UNSIGNED_SHORT, nullptr);
            } else {
                glDrawElements(mesh.primitiveMode, drawCount, GL_UNSIGNED_SHORT, nullptr);
            }
            elementsCount -= drawCount;
            vertexData += vertexCount * vertices.stride;
        }
    } else if (indices.bufferObject || indices.indices) {
        if (mCaches->extensions().getMajorGlVersion() >= 3) {
            glDrawRangeElements(mesh.primitiveMode, 0, mesh.vertexCount - 1,
                                mesh.elementCount, GL_UNSIGNED_SHORT, indices.indices);
        } else {
            glDrawElements(mesh.primitiveMode, mesh.elementCount, GL_UNSIGNED_SHORT,
                           indices.indices);
        }
    } else {
        glDrawArrays(mesh.primitiveMode, 0, mesh.elementCount);
    }
    ...
    // -----------------------------------
    // ---------- Mesh teardown ----------
    // -----------------------------------
    if (vertices.attribFlags & VertexAttribFlags::Alpha) {
        glDisableVertexAttribArray(alphaLocation);
    }
    if (vertices.attribFlags & VertexAttribFlags::Color) {
        glDisableVertexAttribArray(colorLocation);
    }

    GL_CHECKPOINT(MODERATE);
}

These are routine operations: first the GL program initialized in the Glop is made current, then the vertex and index buffer objects are bound, and bindPositionVertexPointer tells OpenGL ES how to read the vertex data. Finally, depending on the mesh, glDrawRangeElements/glDrawElements or glDrawArrays is executed; indexed drawing with glDrawElements is the common case.

Finally, the render pipeline's swapBuffers sends the memory in the Surface to the SurfaceFlinger process for processing. Last of all, the FrameCompleteCallback set in ViewRootImpl is invoked, which ends up calling the pendingDrawFinished method.

ViewRootImpl pendingDrawFinished

void pendingDrawFinished() {
    if (mDrawsNeededToReport == 0) {
        throw new RuntimeException("Unbalanced drawPending/pendingDrawFinished calls");
    }
    mDrawsNeededToReport--;
    if (mDrawsNeededToReport == 0) {
        reportDrawFinished();
    }
}

Finally, as in the earlier onDraw summary, WMS is notified that the draw pass has finished, allowing the next traversal to lay the window out again with relayoutWindow.

At this point one hardware-accelerated rendering pass is complete.

Author: yjy239. Link: www.jianshu.com/p/4854d9fcc… Copyright belongs to the author. For commercial reprints, contact the author for authorization; for non-commercial reprints, please cite the source.