
A preliminary look at the BP file

To understand how SurfaceFlinger starts, you need to look at the Android.bp file in the SurfaceFlinger module directory.

An Android.bp file is eventually converted into a ninja file for compilation, so it is somewhat similar to a GN file. A look at this bp file gives a general idea of which modules SurfaceFlinger involves. Only the important modules are listed here.

cc_defaults {
    name: "libsurfaceflinger_defaults",
    defaults: ["surfaceflinger_defaults"],
    cflags: [
        "-DGL_GLEXT_PROTOTYPES",
        "-DEGL_EGLEXT_PROTOTYPES",
    ],
    shared_libs: [
        "[email protected]",
        "android.hardware.configstore-utils",
        "[email protected]",
        "[email protected]",
        "[email protected]",
        "[email protected]",
        "[email protected]",
        "[email protected]",
        "libbase",
        "libbinder",
        "libbufferhubqueue",
        "libcutils",
        "libdl",
        "libEGL",
        "libfmq",
        "libGLESv1_CM",
        "libGLESv2",
        "libgui",
        "libhardware",
        "libhidlbase",
        "libhidltransport",
        "libhwbinder",
        "liblayers_proto",
        "liblog",
        "libpdx_default_transport",
        "libprotobuf-cpp-lite",
        "libsync",
        "libtimestats_proto",
        "libui",
        "libutils",
        "libvulkan",
    ],
    static_libs: [
        "libserviceutils",
        "libtrace_proto",
        "libvkjson",
        "libvr_manager",
        "libvrflinger",
    ],
    header_libs: [
        "[email protected]",
        "[email protected]",
    ],
    export_static_lib_headers: [
        "libserviceutils",
    ],
    export_shared_lib_headers: [
        "[email protected]",
        "[email protected]",
        "[email protected]",
        "libhidlbase",
        "libhidltransport",
        "libhwbinder",
    ],
}

cc_library_headers {
    ...
}

filegroup {
    name: "libsurfaceflinger_sources",
    srcs: [ ... ],  // cpp source files
}

cc_library_shared {
    name: "libsurfaceflinger",
    defaults: ["libsurfaceflinger_defaults"],
    ...
}

cc_binary {
    name: "surfaceflinger",
    defaults: ["surfaceflinger_defaults"],
    init_rc: ["surfaceflinger.rc"],
    srcs: ["main_surfaceflinger.cpp"],
    whole_static_libs: [
        "libsigchain",
    ],
    shared_libs: [
        "[email protected]",
        "android.hardware.configstore-utils",
        "[email protected]",
        "[email protected]",
        "libbinder",
        "libcutils",
        "libdisplayservicehidl",
        "libhidlbase",
        "libhidltransport",
        "liblayers_proto",
        "liblog",
        "libsurfaceflinger",
        "libtimestats_proto",
        "libutils",
    ],
    static_libs: [
        "libserviceutils",
        "libtrace_proto",
    ],
    ldflags: ["-Wl,--export-dynamic"],
    ...
}
...

You can see that SurfaceFlinger imports the following core things:

  • 1. [email protected]: the HAL implementation of the graphics buffer allocator
  • 2. [email protected]: the HAL of the HWC composition layer
  • 3. Binder, OpenGL ES, hwbinder (the Binder variant used to talk to the HAL layer), and so on
  • 4. surfaceflinger.rc, the .rc file that init loads in the early Android boot stage to start SurfaceFlinger
  • 5. main_surfaceflinger.cpp, the entry point containing SurfaceFlinger's main function

surfaceflinger.rc

service surfaceflinger /system/bin/surfaceflinger
    class core animation
    user system
    group graphics drmrpc readproc
    onrestart restart zygote
    writepid /dev/stune/foreground/tasks
    socket pdx/system/vr/display/client     stream 0666 system graphics u:object_r:pdx_display_client_endpoint_socket:s0
    socket pdx/system/vr/display/manager    stream 0666 system graphics u:object_r:pdx_display_manager_endpoint_socket:s0
    socket pdx/system/vr/display/vsync      stream 0666 system graphics u:object_r:pdx_display_vsync_endpoint_socket:s0

In addition to launching surfaceflinger, three sockets are also set up. These sockets are created automatically when the init process parses the service entry in the .rc file, and they serve the VR display module (pdx).

SurfaceFlinger startup entry: main_surfaceflinger.cpp

int main(int, char**) {
    signal(SIGPIPE, SIG_IGN);

    hardware::configureRpcThreadpool(1 /* maxThreads */,
                                     false /* callerWillJoin */);

    startGraphicsAllocatorService();

    // When SF is launched in its own process, limit the number of
    // binder threads to 4.
    ProcessState::self()->setThreadPoolMaxThreadCount(4);

    // start the thread pool
    sp<ProcessState> ps(ProcessState::self());
    ps->startThreadPool();

    // instantiate surfaceflinger
    sp<SurfaceFlinger> flinger = new SurfaceFlinger();

    setpriority(PRIO_PROCESS, 0, PRIORITY_URGENT_DISPLAY);

    set_sched_policy(0, SP_FOREGROUND);

    // Put most SurfaceFlinger threads in the system-background cpuset
    // Keeps us from unnecessarily using big cores
    // Do this after the binder thread pool init
    if (cpusets_enabled()) set_cpuset_policy(0, SP_SYSTEM);

    // initialize before clients can connect
    flinger->init();

    // publish surface flinger
    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false,
                   IServiceManager::DUMP_FLAG_PRIORITY_CRITICAL);

    // publish GpuService
    sp<GpuService> gpuservice = new GpuService();
    sm->addService(String16(GpuService::SERVICE_NAME), gpuservice, false);

    startDisplayService(); // dependency on SF getting registered above

    struct sched_param param = {0};
    param.sched_priority = 2;
    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        ALOGE("Couldn't set SCHED_FIFO");
    }

    // run surface flinger in this thread
    flinger->run();

    return 0;
}

There are several core methods you can see here:

  • 1. startGraphicsAllocatorService initializes the HAL-layer graphics allocator service
  • 2. ProcessState is initialized, mapping the process into the Binder driver
  • 3. SurfaceFlinger is instantiated
  • 4. set_sched_policy puts the process into the foreground group
  • 5. SurfaceFlinger's init is called
  • 6. Because SurfaceFlinger is itself a Binder service, it is registered with the ServiceManager
  • 7. A GpuService is created and registered with the ServiceManager
  • 8. startDisplayService starts the DisplayService
  • 9. sched_setscheduler switches the scheduling policy to SCHED_FIFO, making SF a real-time process
  • 10. SurfaceFlinger's run method is called

Three points are worth noting:

  • 1. The initialization of the GraphicsAllocator and Display services
  • 2. The process settings made by set_sched_policy and sched_setscheduler
  • 3. SurfaceFlinger's init and run methods

The HAL (hardware abstraction layer) will be covered in the next chapter. Here we focus on the second point, SurfaceFlinger's process scheduling policy, and the third point, SF initialization.

SurfaceFlinger process scheduling policy

SurfaceFlinger is one of the core Android processes, but unlike an App it never shows up in the foreground; it runs in the background. So how do we make sure SurfaceFlinger is not killed, and at the same time keep its priority high enough that the CPU keeps allocating resources to SF first, so it can grab enough CPU time to finish its rendering work within 16 ms?

To answer that, we need a quick introduction to process management in the Linux kernel.

Introduction to Process scheduling in Linux

Linux divides processes into two broad categories:

  • 1. Real-time processes: processes that need to be executed as quickly as possible
  • 2. Normal processes: the vast majority of ordinary processes

For these two categories, there are two corresponding families of scheduling policies:

  • Real-time scheduling policies: SCHED_FIFO, SCHED_RR and SCHED_DEADLINE. SCHED_FIFO is a first-in, first-out policy that orders tasks by priority; SCHED_RR is a round-robin policy that moves a task back to the tail of its queue when its time slice is used up and runs the task at the head; SCHED_DEADLINE always runs the task with the nearest deadline.
  • Normal scheduling policies: SCHED_NORMAL, SCHED_BATCH and SCHED_IDLE. SCHED_NORMAL schedules ordinary processes, SCHED_BATCH schedules background batch processes, and SCHED_IDLE only runs when the system is essentially idle.

All of these policies are dispatched through a field of task_struct:

const struct sched_class *sched_class;

The scheduling classes are, from highest to lowest priority:

  • 1. stop_sched_class: the highest priority, able to interrupt any other process
  • 2. dl_sched_class: the deadline policy
  • 3. rt_sched_class: the real-time policies, SCHED_RR and SCHED_FIFO
  • 4. fair_sched_class: the normal-process policy based on the fair scheduler CFS
  • 5. idle_sched_class: the policy for idle processes

CFS: the fair scheduler keeps a vruntime for every normal process, picks the task with the smallest vruntime out of a red-black tree to run, and puts it back into the tree afterwards. The update rule is:

vruntime += delta_exec * NICE_0_LOAD / weight

Here delta_exec is the actual time the task just ran and weight is the task's load weight, so a heavily weighted task accumulates virtual runtime more slowly. Because the task with the smallest vruntime always runs next, tasks that have run less get picked ahead of those that have run longer.
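To make that bookkeeping concrete, here is a minimal sketch of the idea in C++ (illustrative only, not the kernel code; Task, updateVruntime and pickNext are made-up names, and std::multimap stands in for the red-black tree):

#include <cstdint>
#include <map>

struct Task {
    uint64_t weight;    // load weight derived from the nice value
    uint64_t vruntime;  // virtual runtime in nanoseconds
};

constexpr uint64_t NICE_0_LOAD = 1024;

// Charge the task for delta_exec nanoseconds of real CPU time.
void updateVruntime(Task& t, uint64_t delta_exec) {
    t.vruntime += delta_exec * NICE_0_LOAD / t.weight;  // heavier tasks age slower
}

// The kernel keeps runnable tasks in a red-black tree keyed by vruntime;
// the leftmost (smallest vruntime) entry runs next.
Task* pickNext(std::multimap<uint64_t, Task*>& runqueue) {
    return runqueue.empty() ? nullptr : runqueue.begin()->second;
}

int main() {
    std::multimap<uint64_t, Task*> runqueue;
    Task a{1024, 0}, b{2048, 0};            // b has a higher weight (lower nice value)
    runqueue.insert({a.vruntime, &a});
    runqueue.insert({b.vruntime, &b});

    Task* next = pickNext(runqueue);        // smallest vruntime runs first
    updateVruntime(*next, 4000000);         // charge it 4 ms of CPU time
    // in the real scheduler the task is then reinserted keyed by its new vruntime
    return 0;
}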

Processes with different scheduling policies are attached to different scheduling-class run queues. When __schedule is called, it calls pick_next_task, which walks the list of scheduling classes in the priority order listed above.

With this background, the whole picture for SF is easy to understand. SF is put into the SP_FOREGROUND group (the foreground process group) and scheduled with SCHED_FIFO. This guarantees that SF runs at a higher priority: every time the scheduler walks the scheduling classes, SF gets the CPU before our App does.

Initialization of SurfaceFlinger

Before looking at process initialization, let’s take a look at SurfaceFlinger’s UML diagram.

The relationships in the SF system are fairly involved; let's focus on the two most critical classes:

  • 1. ISurfaceComposer: the Binder interface that processes other than SF use to call into SF and register callbacks. I picked the more important methods for the UML diagram; keep them in mind, as they will come up again later in the analysis.
  • 2. HWC2::ComposerCallback: the callback interface through which the HWC hardware layer notifies the SF process. It has three callbacks:
  • 1. Hotplug: notifies the upper-layer SF process that a display has been plugged in or removed
  • 2. Refresh: the underlying HWC hardware asks the upper-layer SF process to refresh
  • 3. Vsync: notifies the upper-layer SF process that a vsync signal has arrived

With these two large callback mechanisms, SF can communicate with the hardware layer and App processes, respectively, thus forming the initial bridge for SF.

With that initial impression, let’s take a closer look at the SF constructor:

SurfaceFlinger::SurfaceFlinger(SurfaceFlinger::SkipInitializationTag)
      : BnSurfaceComposer(),
        mTransactionFlags(0),
        mTransactionPending(false),
        mAnimTransactionPending(false),
        mLayersRemoved(false),
        mLayersAdded(false),
        mRepaintEverything(0),
        mBootTime(systemTime()),
        mBuiltinDisplays(),
        mVisibleRegionsDirty(false),
        mGeometryInvalid(false),
        mAnimCompositionPending(false),
        mDebugRegion(0),
        mDebugDDMS(0),
        mDebugDisableHWC(0),
        mDebugDisableTransformHint(0),
        mDebugInSwapBuffers(0),
        mLastSwapBufferTime(0),
        mDebugInTransaction(0),
        mLastTransactionTime(0),
        mBootFinished(false),
        mForceFullDamage(false),
        mPrimaryDispSync("PrimaryDispSync"),
        mPrimaryHWVsyncEnabled(false),
        mHWVsyncAvailable(false),
        mHasPoweredOff(false),
        mNumLayers(0),
        mVrFlingerRequestsDisplay(false),
        mMainThreadId(std::this_thread::get_id()),
        mCreateBufferQueue(&BufferQueue::createBufferQueue),
        mCreateNativeWindowSurface(&impl::NativeWindowSurface::create) {}

Here we focus on the more important objects being initialized:

  • 1. BnSurfaceComposer: SurfaceFlinger's parent class, the Binder server side of ISurfaceComposer
  • 2. mPrimaryDispSync: the vsync synchronizer for the primary display
  • 3. The BufferQueue factory (mCreateBufferQueue), which creates the graphic buffer queues
SurfaceFlinger::SurfaceFlinger() : SurfaceFlinger(SkipInitialization) {
    vsyncPhaseOffsetNs = getInt64<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::vsyncEventPhaseOffsetNs>(1000000);

    sfVsyncPhaseOffsetNs = getInt64<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::vsyncSfEventPhaseOffsetNs>(1000000);

    hasSyncFramework = getBool<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::hasSyncFramework>(true);

    dispSyncPresentTimeOffset = getInt64<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::presentTimeOffsetFromVSyncNs>(0);

    useHwcForRgbToYuv = getBool<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::useHwcForRGBtoYUV>(false);

    maxVirtualDisplaySize = getUInt64<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::maxVirtualDisplaySize>(0);

    // Vr flinger is only enabled on Daydream ready devices.
    useVrFlinger = getBool<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::useVrFlinger>(false);

    maxFrameBufferAcquiredBuffers = getInt64<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::maxFrameBufferAcquiredBuffers>(2);

    hasWideColorDisplay = getBool<ISurfaceFlingerConfigs,
            &ISurfaceFlingerConfigs::hasWideColorDisplay>(false);

    V1_1::DisplayOrientation primaryDisplayOrientation =
            getDisplayOrientation<V1_1::ISurfaceFlingerConfigs,
                    &V1_1::ISurfaceFlingerConfigs::primaryDisplayOrientation>(
                    V1_1::DisplayOrientation::ORIENTATION_0);

    switch (primaryDisplayOrientation) {
        case V1_1::DisplayOrientation::ORIENTATION_90:
            mPrimaryDisplayOrientation = DisplayState::eOrientation90;
            break;
        case V1_1::DisplayOrientation::ORIENTATION_180:
            mPrimaryDisplayOrientation = DisplayState::eOrientation180;
            break;
        case V1_1::DisplayOrientation::ORIENTATION_270:
            mPrimaryDisplayOrientation = DisplayState::eOrientation270;
            break;
        default:
            mPrimaryDisplayOrientation = DisplayState::eOrientationDefault;
            break;
    }
    ...
    mPrimaryDispSync.init(SurfaceFlinger::hasSyncFramework,
                          SurfaceFlinger::dispSyncPresentTimeOffset);

    // debugging stuff...
    char value[PROPERTY_VALUE_MAX];
    ...
    property_get("debug.sf.enable_hwc_vds", value, "0");
    mUseHwcVirtualDisplays = atoi(value);

    property_get("ro.sf.disable_triple_buffer", value, "1");
    mLayerTripleBufferingDisabled = atoi(value);

    const size_t defaultListSize = MAX_LAYERS;
    auto listSize = property_get_int32("debug.sf.max_igbp_list_size", int32_t(defaultListSize));
    mMaxGraphicBufferProducerListSize = (listSize > 0) ? size_t(listSize) : defaultListSize;

    property_get("debug.sf.early_phase_offset_ns", value, "0");
    const int earlyWakeupOffsetOffsetNs = atoi(value);
    mVsyncModulator.setPhaseOffsets(sfVsyncPhaseOffsetNs - earlyWakeupOffsetOffsetNs,
                                    sfVsyncPhaseOffsetNs);
    ...
}

The SF constructor initialization does several things:

  • 1. vsyncPhaseOffsetNs and sfVsyncPhaseOffsetNs are initialized; they are the vsync phase offsets of the App and of SF respectively. The basic concept of the phase offset was covered in section 1 and will get a dedicated article later.
  • 2. The rendering orientation of SF (which angle the primary display uses) is set.
  • 3. mPrimaryDispSync, the vsync synchronizer for the primary display, is initialized.
  • 4. Whether triple buffering is disabled and whether HWC virtual displays are used is read from Android's global system properties.

PrimaryDispSync init

void DispSync::init(bool hasSyncFramework, int64_t dispSyncPresentTimeOffset) {
    mIgnorePresentFences = !hasSyncFramework;
    mPresentTimeOffset = dispSyncPresentTimeOffset;
    mThread->run("DispSync", PRIORITY_URGENT_DISPLAY + PRIORITY_MORE_FAVORABLE);

    // set DispSync to SCHED_FIFO to minimize jitter
    struct sched_param param = {0};
    param.sched_priority = 2;
    if (sched_setscheduler(mThread->getTid(), SCHED_FIFO, &param) != 0) {
        ALOGE("Couldn't set SCHED_FIFO for DispSyncThread");
    }

    reset();
    beginResync();

    if (kTraceDetailedInfo) {
        if (!mIgnorePresentFences && kEnableZeroPhaseTracer) {
            mZeroPhaseTracer = std::make_unique<ZeroPhaseTracer>();
            addEventListener("ZeroPhaseTracer", 0, mZeroPhaseTracer.get());
        }
    }
}

During this call the DispSyncThread thread is created and started, some simple state is initialized, and the thread's scheduling class is set to SCHED_FIFO. Note that the Linux kernel does not really distinguish between a thread and a process: to the kernel both are task_structs. The difference is that when a thread is created, the clone system call is used so that the new task points back to the parent process and shares the parent's address space and resources instead of copying them, as the small sketch below illustrates.

This keeps the vsync phase calculation running at a very high priority. For now an overview of this class is enough; it will be analyzed in detail later.
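As a small standalone illustration of the clone point above (plain Linux code, not part of Android), the following creates a thread-like task by passing clone the flags that share the parent's resources:

#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

static int worker(void*) {
    printf("child task pid=%d\n", getpid());
    return 0;
}

int main() {
    const size_t kStackSize = 1024 * 1024;
    char* stack = static_cast<char*>(malloc(kStackSize));

    // CLONE_VM / CLONE_FS / CLONE_FILES / CLONE_SIGHAND make the new task share
    // the address space, filesystem info, fd table and signal handlers -- the
    // essence of a "thread". Without them, clone() behaves more like fork().
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t tid = clone(worker, stack + kStackSize, flags, nullptr);

    waitpid(tid, nullptr, 0);
    free(stack);
    return 0;
}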

SurfaceFlinger onFirstRef

As you can also see from the main function, SF is held by a smart pointer -- an sp strong-reference pointer. When the first strong reference to the object is taken, the onFirstRef method is called right after the constructor, which further instantiates the objects SF needs internally.
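For intuition, here is a heavily simplified sketch of that mechanism (not the real RefBase/sp implementation, just the shape of it): the first strong reference taken on the object triggers onFirstRef right after construction.

#include <atomic>
#include <cstdio>

class RefBase {
public:
    void incStrong() {
        if (mCount.fetch_add(1) == 0) {
            onFirstRef();  // called exactly once, on the first strong reference
        }
    }
    void decStrong() {
        if (mCount.fetch_sub(1) == 1) delete this;
    }
    virtual ~RefBase() = default;
protected:
    virtual void onFirstRef() {}
private:
    std::atomic<int> mCount{0};
};

template <typename T>
class sp {
public:
    sp(T* p) : mPtr(p) { if (mPtr) mPtr->incStrong(); }
    ~sp() { if (mPtr) mPtr->decStrong(); }
    T* operator->() const { return mPtr; }
private:
    T* mPtr;
};

struct Flinger : RefBase {
    void onFirstRef() override { printf("init internal objects here\n"); }
};

int main() {
    sp<Flinger> flinger = new Flinger();  // constructor runs, then onFirstRef()
    return 0;
}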

void SurfaceFlinger::onFirstRef()
{
    mEventQueue->init(this);
}

This method calls the init method of mEventQueue, which is the MessageQueue object declared below.

mutable std::unique_ptr<MessageQueue> mEventQueue{std::make_unique<impl::MessageQueue>()};

The MessageQueue in SF is actually very similar to the MessageQueue design developed in the Android application layer, but some roles do slightly different things.

Role of SurfaceFlinger’s MessageQueue mechanism:

  • 1. MessageQueue exposes the message-queue-facing interface. Unlike the application-layer MessageQueue, it does not keep a linked list of Messages as a cache; it only provides the interfaces for sending messages and waiting for messages.
  • 2. The native Looper is the real core of the whole MessageQueue; it builds a fast message callback mechanism around epoll, assisted by an eventfd.
  • 3. The native Handler implements handleMessage. When Looper dispatches, the Handler's handleMessage is called to handle the callback.

To reinforce the point, the original article lays out the correspondence between the application-layer MessageQueue/Looper and the MessageQueue design in SF for comparison.

MessageQueue init

void MessageQueue::init(const sp<SurfaceFlinger>& flinger) {
    mFlinger = flinger;
    mLooper = new Looper(true);
    mHandler = new Handler(*this);
}

You can see that a Looper and a Handler are instantiated in MessageQueue. The native Looper works much like the Java-layer Looper; the key part is the Handler's callback:

void MessageQueue::Handler::handleMessage(const Message& message) {
    switch (message.what) {
        case INVALIDATE:
            android_atomic_and(~eventMaskInvalidate, &mEventMask);
            mQueue.mFlinger->onMessageReceived(message.what);
            break;
        case REFRESH:
            android_atomic_and(~eventMaskRefresh, &mEventMask);
            mQueue.mFlinger->onMessageReceived(message.what);
            break;
    }
}

Two kinds of refresh messages are handled here: INVALIDATE, which requests that dirty content be recomposed, and REFRESH, which triggers an actual refresh. Both end up calling back into SF's onMessageReceived. In other words, whenever a frame needs to be refreshed, the request is posted asynchronously through mEventQueue into this Handler.

That is enough background for now; let's move on to the SF init method called from main.

SurfaceFlinger init

void SurfaceFlinger::init() {
    Mutex::Autolock _l(mStateLock);

    // start the EventThread
    mEventThreadSource =
            std::make_unique<DispSyncSource>(&mPrimaryDispSync, SurfaceFlinger::vsyncPhaseOffsetNs,
                                             true, "app");
    mEventThread = std::make_unique<impl::EventThread>(mEventThreadSource.get(),
                                                       [this]() { resyncWithRateLimit(); },
                                                       impl::EventThread::InterceptVSyncsCallback(),
                                                       "appEventThread");
    mSfEventThreadSource =
            std::make_unique<DispSyncSource>(&mPrimaryDispSync,
                                             SurfaceFlinger::sfVsyncPhaseOffsetNs, true, "sf");

    mSFEventThread =
            std::make_unique<impl::EventThread>(mSfEventThreadSource.get(),
                                                [this]() { resyncWithRateLimit(); },
                                                [this](nsecs_t timestamp) {
                                                    mInterceptor->saveVSyncEvent(timestamp);
                                                },
                                                "sfEventThread");
    mEventQueue->setEventThread(mSFEventThread.get());
    mVsyncModulator.setEventThread(mSFEventThread.get());

    // Get a RenderEngine for the given display / config (can't fail)
    getBE().mRenderEngine =
            RE::impl::RenderEngine::create(HAL_PIXEL_FORMAT_RGBA_8888,
                                           hasWideColorDisplay
                                                   ? RE::RenderEngine::WIDE_COLOR_SUPPORT
                                                   : 0);

    getBE().mHwc.reset(
            new HWComposer(std::make_unique<Hwc2::impl::Composer>(getBE().mHwcServiceName)));
    getBE().mHwc->registerCallback(this, getBE().mComposerSequenceId);

    // The first time through this is a no-op; it only does real work when SF
    // restarts and finds a display already plugged in.
    processDisplayHotplugEventsLocked();

    // The first time through, no display Binder object is connected yet, so
    // this is effectively skipped.
    getDefaultDisplayDeviceLocked()->makeCurrent();

    // Enable the vr-related modules
    ...

    mEventControlThread = std::make_unique<impl::EventControlThread>(
            [this](bool enabled) { setVsyncEnabled(HWC_DISPLAY_PRIMARY, enabled); });

    // initialize our drawing state
    mDrawingState = mCurrentState;

    // set initial conditions (e.g. unblank default device)
    initializeDisplays();

    getBE().mRenderEngine->primeCache();

    // Inform native graphics APIs whether the present timestamp is supported:
    if (getHwComposer().hasCapability(HWC2::Capability::PresentFenceIsNotReliable)) {
        mStartPropertySetThread = new StartPropertySetThread(false);
    } else {
        mStartPropertySetThread = new StartPropertySetThread(true);
    }

    if (mStartPropertySetThread->Start() != NO_ERROR) {
        ...
    }

    mLegacySrgbSaturationMatrix =
            getBE().mHwc->getDataspaceSaturationMatrix(HWC_DISPLAY_PRIMARY, Dataspace::SRGB_LINEAR);
}

A number of important objects are initialized during init:

  • 1. The DispSyncSource objects are created
  • 2. The EventThreads are created
  • 3. The EventQueue is bound to the SF EventThread as its listener
  • 4. The RenderEngine is created
  • 5. HWComposer is created
  • 6. The EventControlThread is created
  • 7. The displays are initialized and the DisplayService is connected

We’re going to show you that you don’t have to go into a lot of depth to understand how each class works and how it works, and I’ll talk to you about that later.

Initialization of DispSyncSource and EventThread

These two are usually discussed together. The initialization happens twice: once for the app EventThread and once for the SF EventThread.

The app EventThread
    mEventThreadSource =
            std::make_unique<DispSyncSource>(&mPrimaryDispSync, SurfaceFlinger::vsyncPhaseOffsetNs,
                                             true, "app");
    mEventThread = std::make_unique<impl::EventThread>(mEventThreadSource.get(),
                                                       [this]() { resyncWithRateLimit(); },
                                                       impl::EventThread::InterceptVSyncsCallback(),
                                                       "appEventThread");

As you can see, mEventThreadSource is a DispSyncSource object. Let's look at its constructor:

class DispSyncSource final : public VSyncSource, private DispSync::Callback {
public:
    DispSyncSource(DispSync* dispSync, nsecs_t phaseOffset, bool traceVsync, const char* name)
          : mName(name),
            mValue(0),
            mTraceVsync(traceVsync),
            mVsyncOnLabel(String8::format("VsyncOn-%s", name)),
            mVsyncEventLabel(String8::format("VSYNC-%s", name)),
            mDispSync(dispSync),
            mCallbackMutex(),
            mVsyncMutex(),
            mPhaseOffset(phaseOffset),
            mEnabled(false) {}

    ~DispSyncSource() override = default;
    ...
}

The app's DispSyncSource is constructed with mPrimaryDispSync and the app's vsync phase offset (vsyncPhaseOffsetNs, 1000000 ns by default).

Next the EventThread is created, taking the app's DispSyncSource as a parameter:

EventThread::EventThread(VSyncSource* src, ResyncWithRateLimitCallback resyncWithRateLimitCallback,
                         InterceptVSyncsCallback interceptVSyncsCallback, const char* threadName)
      : mVSyncSource(src),
        mResyncWithRateLimitCallback(resyncWithRateLimitCallback),
        mInterceptVSyncsCallback(interceptVSyncsCallback) {
    for (auto& event : mVSyncEvent) {
        event.header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
        event.header.id = 0;
        event.header.timestamp = 0;
        event.vsync.count = 0;
    }

    mThread = std::thread(&EventThread::threadMain, this);

    pthread_setname_np(mThread.native_handle(), threadName);

    pid_t tid = pthread_gettid_np(mThread.native_handle());

    // Use SCHED_FIFO to minimize jitter
    constexpr int EVENT_THREAD_PRIORITY = 2;
    struct sched_param param = {0};
    param.sched_priority = EVENT_THREAD_PRIORITY;
    if (pthread_setschedparam(mThread.native_handle(), SCHED_FIFO, &param) != 0) {
        ALOGE("Couldn't set SCHED_FIFO for EventThread");
    }

    set_sched_policy(tid, SP_FOREGROUND);
}

What happens here is very similar to DispSync::init: an internal thread is created and started, given the SCHED_FIFO policy, and placed in the foreground group, so the task runs with a higher priority.

void EventThread::threadMain() NO_THREAD_SAFETY_ANALYSIS {
    std::unique_lock<std::mutex> lock(mMutex);
    while (mKeepRunning) {
        DisplayEventReceiver::Event event;
        Vector<sp<EventThread::Connection> > signalConnections;
        signalConnections = waitForEventLocked(&lock, &event);

        // dispatch events to listeners...
        const size_t count = signalConnections.size();
        for (size_t i = 0; i < count; i++) {
            const sp<Connection>& conn(signalConnections[i]);
            // now see if we still need to report this event
            status_t err = conn->postEvent(event);
            if (err == -EAGAIN || err == -EWOULDBLOCK) {
                // The destination doesn't accept events anymore, it's probably
                // full. For now, we just drop the events on the floor.
                // FIXME: Note that some events cannot be dropped and would have
                // to be re-sent later.
                // Right-now we don't have the ability to do this.
                ALOGW("EventThread: dropping event (%08x) for connection %p", event.header.type,
                      conn.get());
            } else if (err < 0) {
                // handle any other error on the pipe as fatal. the only
                // reasonable thing to do is to clean-up this connection.
                // The most common error we'll get here is -EPIPE.
                removeDisplayEventConnectionLocked(signalConnections[i]);
            }
        }
    }
}

The thread blocks in waitForEventLocked, waiting for Connections registered by external clients. Typically, when an application creates a Choreographer it registers a DisplayEventReceiver, which through Binder registers a Connection with the EventThread in the SF process. Once the thread wakes up and finds connections that need the event, postEvent is called to deliver the vsync signal to the App.

Inside the waitForEventLocked wait loop, whenever vsync needs to be resynchronized, the resync callback passed into the constructor is invoked:

resyncWithRateLimit();

It also decides, based on the current conditions, whether the vsync signal should be enabled.

A rough idea is enough for now; a dedicated article will cover it later.

The SF EventThread

    mSfEventThreadSource =
            std::make_unique<DispSyncSource>(&mPrimaryDispSync,
                                             SurfaceFlinger::sfVsyncPhaseOffsetNs, true, "sf");

    mSFEventThread =
            std::make_unique<impl::EventThread>(mSfEventThreadSource.get(),
                                                [this]() { resyncWithRateLimit(); },
                                                [this](nsecs_t timestamp) {
                                                    mInterceptor->saveVSyncEvent(timestamp);
                                                },
                                                "sfEventThread");
    mEventQueue->setEventThread(mSFEventThread.get());
    mVsyncModulator.setEventThread(mSFEventThread.get());

The creation logic is the same; what is new is MessageQueue's setEventThread call:

MessageQueue setEventThread
void MessageQueue::setEventThread(android::EventThread* eventThread) {
    if (mEventThread == eventThread) {
        return;
    }

    if (mEventTube.getFd() >= 0) {
        mLooper->removeFd(mEventTube.getFd());
    }

    mEventThread = eventThread;
    mEvents = eventThread->createEventConnection();
    mEvents->stealReceiveChannel(&mEventTube);
    mLooper->addFd(mEventTube.getFd(), 0, Looper::EVENT_INPUT, MessageQueue::cb_eventReceiver,
                   this);
}

Here you can see logic very similar to what an App does: a Connection is first created via createEventConnection on the EventThread, and waitForEventLocked can then see this connection.

sp<BnDisplayEventConnection> EventThread::createEventConnection() const {
    return new Connection(const_cast<EventThread*>(this));
}
EventThread::Connection::Connection(EventThread* eventThread)
      : count(-1), mEventThread(eventThread), mChannel(gui::BitTube::DefaultSize) {}

EventThread::Connection::~Connection() {
    // do nothing here -- clean-up will happen automatically
    // when the main thread wakes up
}

void EventThread::Connection::onFirstRef() {
    // NOTE: mEventThread doesn't hold a strong reference on us
    mEventThread->registerDisplayEventConnection(this);
}

A BitTube object is created and the Connection registers itself with the EventThread, so waitForEventLocked can pick up the new listener. So what is a BitTube?

BitTube

It is essentially a wrapper around a socketpair. What is a socketpair? Much like a pipe, it is a pair of connected sockets: you can write on fd 1 and read on fd 0, or write on fd 0 and read on fd 1. In other words, it is a full-duplex channel.

static const size_t DEFAULT_SOCKET_BUFFER_SIZE = 4 * 1024;

BitTube::BitTube(size_t bufsize) {
    init(bufsize, bufsize);
}

BitTube::BitTube(DefaultSizeType) : BitTube(DEFAULT_SOCKET_BUFFER_SIZE) {}

void BitTube::init(size_t rcvbuf, size_t sndbuf) {
    int sockets[2];
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets) == 0) {
        size_t size = DEFAULT_SOCKET_BUFFER_SIZE;
        setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
        setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
        // since we don't use the "return channel", we keep it small...
        setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
        setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
        fcntl(sockets[0], F_SETFL, O_NONBLOCK);
        fcntl(sockets[1], F_SETFL, O_NONBLOCK);
        mReceiveFd = sockets[0];
        mSendFd = sockets[1];
    } else {
        mReceiveFd = -errno;
        ALOGE("BitTube: pipe creation failed (%s)", strerror(-mReceiveFd));
    }
}

BitTube uses socket 0 as the receive end and socket 1 as the send end, so in practice it is used much like a pipe.
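As a small standalone illustration (plain POSIX code, not the BitTube implementation itself), the following shows the full-duplex behaviour of a socketpair:

#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) != 0) {
        perror("socketpair");
        return 1;
    }

    const char ping[] = "vsync";
    write(fds[1], ping, sizeof(ping));       // "send" end, like BitTube's mSendFd

    char buf[16] = {};
    read(fds[0], buf, sizeof(buf));          // "receive" end, like mReceiveFd
    printf("receiver got: %s\n", buf);

    // The reverse direction works too, which is what makes it full-duplex.
    write(fds[0], "ack", 4);
    read(fds[1], buf, sizeof(buf));
    printf("sender got: %s\n", buf);

    close(fds[0]);
    close(fds[1]);
    return 0;
}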

Then call the stealReceiveChannel of EventThread::Connection:

status_t EventThread::Connection::stealReceiveChannel(gui::BitTube* outChannel) {
    outChannel->setReceiveFd(mChannel.moveReceiveFd());
    return NO_ERROR;
}
int BitTube::getFd() const {
    return mReceiveFd;
}

The receive fd is handed over to MessageQueue (mEventTube), and Looper registers that receiving end so it can tell when data arrives from the sending end.

When waitForEventLocked wakes up with an event to deliver, the Connection's postEvent method is called:

status_t EventThread::Connection::postEvent(const DisplayEventReceiver::Event& event) {
    ssize_t size = DisplayEventReceiver::sendEvents(&mChannel, &event, 1);
    return size < 0 ? status_t(size) : status_t(NO_ERROR);
}
ssize_t DisplayEventReceiver::sendEvents(gui::BitTube* dataChannel,
        Event const* events, size_t count)
{
    return gui::BitTube::sendObjects(dataChannel, events, count);
}

This completes the path from sender to receiver. The receive fd is registered with epoll (through Looper), and when data arrives the MessageQueue callback is invoked:

int MessageQueue::cb_eventReceiver(int fd, int events, void* data) {
    MessageQueue* queue = reinterpret_cast<MessageQueue*>(data);
    return queue->eventReceiver(fd, events);
}

int MessageQueue::eventReceiver(int /*fd*/, int /*events*/) {
    ssize_t n;
    DisplayEventReceiver::Event buffer[8];
    while ((n = DisplayEventReceiver::getEvents(&mEventTube, buffer, 8)) > 0) {
        for (int i = 0; i < n; i++) {
            if (buffer[i].header.type == DisplayEventReceiver::DISPLAY_EVENT_VSYNC) {
                mHandler->dispatchInvalidate();
                break;
            }
        }
    }
    return 1;
}

MessageQueue then calls dispatchInvalidate on the Handler, which eventually ends up in SF's onMessageReceived refresh callback.

Initialization of the rendering engine

    getBE().mRenderEngine =
            RE::impl::RenderEngine::create(HAL_PIXEL_FORMAT_RGBA_8888,
                                           hasWideColorDisplay
                                                   ? RE::RenderEngine::WIDE_COLOR_SUPPORT
                                                   : 0);

SF owns a SurfaceFlingerBE object that is initialized alongside it, and the RenderEngine is created for this SurfaceFlingerBE. You can think of SurfaceFlingerBE as SF's hardware-facing back end: it holds the interfaces to all the hardware parts, while SF itself drives the refresh mechanism for the whole Android system.

std::unique_ptr<RenderEngine> RenderEngine::create(int hwcFormat, uint32_t featureFlags) {
    // initialize EGL for the default display
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (!eglInitialize(display, nullptr, nullptr)) {
        LOG_ALWAYS_FATAL("failed to initialize EGL");
    }

    GLExtensions& extensions = GLExtensions::getInstance();
    extensions.initWithEGLStrings(eglQueryStringImplementationANDROID(display, EGL_VERSION),
                                  eglQueryStringImplementationANDROID(display, EGL_EXTENSIONS));

    // The code assumes that ES2 or later is available if this extension is
    // supported.
    EGLConfig config = EGL_NO_CONFIG;
    if (!extensions.hasNoConfigContext()) {
        config = chooseEglConfig(display, hwcFormat, /*logConfig*/ true);
    }

    EGLint renderableType = 0;
    if (config == EGL_NO_CONFIG) {
        renderableType = EGL_OPENGL_ES2_BIT;
    } else if (!eglGetConfigAttrib(display, config, EGL_RENDERABLE_TYPE, &renderableType)) {
        LOG_ALWAYS_FATAL("can't query EGLConfig RENDERABLE_TYPE");
    }
    EGLint contextClientVersion = 0;
    if (renderableType & EGL_OPENGL_ES2_BIT) {
        contextClientVersion = 2;
    } else if (renderableType & EGL_OPENGL_ES_BIT) {
        contextClientVersion = 1;
    } else {
        LOG_ALWAYS_FATAL("no supported EGL_RENDERABLE_TYPEs");
    }

    std::vector<EGLint> contextAttributes;
    contextAttributes.reserve(6);
    contextAttributes.push_back(EGL_CONTEXT_CLIENT_VERSION);
    contextAttributes.push_back(contextClientVersion);
    bool useContextPriority = overrideUseContextPriorityFromConfig(extensions.hasContextPriority());
    if (useContextPriority) {
        contextAttributes.push_back(EGL_CONTEXT_PRIORITY_LEVEL_IMG);
        contextAttributes.push_back(EGL_CONTEXT_PRIORITY_HIGH_IMG);
    }
    contextAttributes.push_back(EGL_NONE);

    EGLContext ctxt = eglCreateContext(display, config, nullptr, contextAttributes.data());

    // if can't create a GL context, we can only abort.
    LOG_ALWAYS_FATAL_IF(ctxt == EGL_NO_CONTEXT, "EGLContext creation failed");

    // now figure out what version of GL did we actually get
    // NOTE: a dummy surface is not needed if KHR_create_context is supported
    EGLConfig dummyConfig = config;
    if (dummyConfig == EGL_NO_CONFIG) {
        dummyConfig = chooseEglConfig(display, hwcFormat, /*logConfig*/ true);
    }
    EGLint attribs[] = {EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE, EGL_NONE};
    EGLSurface dummy = eglCreatePbufferSurface(display, dummyConfig, attribs);
    LOG_ALWAYS_FATAL_IF(dummy == EGL_NO_SURFACE, "can't create dummy pbuffer");
    EGLBoolean success = eglMakeCurrent(display, dummy, dummy, ctxt);
    LOG_ALWAYS_FATAL_IF(!success, "can't make dummy pbuffer current");
    extensions.initWithGLStrings(glGetString(GL_VENDOR), glGetString(GL_RENDERER),
                                 glGetString(GL_VERSION), glGetString(GL_EXTENSIONS));

    GlesVersion version = parseGlesVersion(extensions.getVersion());

    // initialize the renderer while GL is current
    std::unique_ptr<RenderEngine> engine;
    switch (version) {
        case GLES_VERSION_1_0:
        case GLES_VERSION_1_1:
            LOG_ALWAYS_FATAL("SurfaceFlinger requires OpenGL ES 2.0 minimum to run.");
            break;
        case GLES_VERSION_2_0:
        case GLES_VERSION_3_0:
            engine = std::make_unique<GLES20RenderEngine>(featureFlags);
            break;
    }
    engine->setEGLHandles(display, config, ctxt);
    ...
    eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroySurface(display, dummy);

    return engine;
}

This is a classic EGL/OpenGL ES initialization sequence. It does several things:

  • 1. eglGetDisplay obtains the system's default display object and eglInitialize initializes EGL on it
  • 2. The EGL version and extensions are queried
  • 3. chooseEglConfig picks the EGL configuration. The logic is interesting: eglGetConfigs reports how many configurations exist, eglChooseConfig returns the recommended configurations for the requested attributes, and the one matching the current pixel format is selected
  • 4. eglCreateContext creates the EGL context
  • 5. The GL client version is determined
  • 6. eglCreatePbufferSurface creates a dummy surface (a small off-screen buffer), and eglMakeCurrent binds it together with the context; its purpose is to probe which OpenGL ES version is actually available
  • 7. setEGLHandles stores the EGLDisplay, config and context globally
  • 8. eglMakeCurrent releases the current context, and the dummy surface is destroyed

From this we can see what an industrial-grade OpenGL ES initialization looks like.

Initialize HWComposer

This object is extremely important: it connects the hardware abstraction layer, the hardware itself and SF, and is the core class for composition. Its full name is Hardware Composer -- the HWC we keep referring to.

    getBE().mHwc.reset(
            new HWComposer(std::make_unique<Hwc2::impl::Composer>(getBE().mHwcServiceName)));
    getBE().mHwc->registerCallback(this, getBE().mComposerSequenceId);

This article will not go into much detail about how HWC connects to the HAL; let's just take a quick look at the logic, starting with the Hwc2::impl::Composer constructor:

Composer::Composer(const std::string& serviceName)
      : mWriter(kWriterInitialSize),
        mIsUsingVrComposer(serviceName == std::string("vr")) {
    mComposer = V2_1::IComposer::getService(serviceName);

    mComposer->createClient(
            [&](const auto& tmpError, const auto& tmpClient) {
                if (tmpError == Error::NONE) {
                    mClient = tmpClient;
                }
            });
    ...
    // 2.2 support is optional
    sp<IComposer> composer_2_2 = IComposer::castFrom(mComposer);
    if (composer_2_2 != nullptr) {
        mClient_2_2 = IComposerClient::castFrom(mClient);
        ...
    }

    if (mIsUsingVrComposer) {
        sp<IVrComposerClient> vrClient = IVrComposerClient::castFrom(mClient);
        ...
    }
}

You can see that the Composer object holds an mComposer object. Think of it as an IComposer interface handle, delivered Binder-style from the HAL-side service; to interact with the hardware layer you simply call methods on this IComposer. createClient is then called on it to obtain a client object, and if a 2.2 IComposer implementation is available, the 2.1 interfaces are cast up to 2.2.

In this way the software-layer Composer is paired with the HAL-layer composer, ready for HWComposer to use. The next article will look at how the hardware abstraction layer surfaces into the software layer; from here on I will simply call it the HAL layer.

HWComposer initialization (the non-HAL part) and callback registration
HWComposer::HWComposer(std::unique_ptr<android::Hwc2::Composer> composer)
      : mHwcDevice(std::make_unique<HWC2::Device>(std::move(composer))) {}

In this constructor you can see that HWComposer creates an HWC2::Device object, which is the object that actually operates on the HAL layer. Its constructor has little logic and can be skimmed:

Device::Device(std::unique_ptr<android::Hwc2::Composer> composer) : mComposer(std::move(composer)) {
    loadCapabilities();
}
HWComposer callback registration
getBE().mHwc->registerCallback(this, getBE().mComposerSequenceId);

The SF listener is then registered into the HWC.

void HWComposer::registerCallback(HWC2::ComposerCallback* callback,
                                  int32_t sequenceId) {
    mHwcDevice->registerCallback(callback, sequenceId);
}
void Device::registerCallback(ComposerCallback* callback, int32_t sequenceId) {
    if (mRegisteredCallback) {
        ALOGW("Callback already registered. Ignored extra registration "
                "attempt.");
        return;
    }
    mRegisteredCallback = true;
    sp<ComposerCallbackBridge> callbackBridge(
            new ComposerCallbackBridge(callback, sequenceId));
    mComposer->registerCallback(callbackBridge);
}

HWC wraps the callback in a ComposerCallbackBridge object and registers it with the HAL. The bridge plays a role similar to ServiceDispatcher or ReceiverDispatcher in the Java framework: it is an object the HAL side can talk to over the underlying protocol, and it forwards the calls back up.

class ComposerCallbackBridge : public Hwc2::IComposerCallback {
public:
    ComposerCallbackBridge(ComposerCallback* callback, int32_t sequenceId)
            : mCallback(callback), mSequenceId(sequenceId) {}

    Return<void> onHotplug(Hwc2::Display display,
                           IComposerCallback::Connection conn) override
    {
        HWC2::Connection connection = static_cast<HWC2::Connection>(conn);
        mCallback->onHotplugReceived(mSequenceId, display, connection);
        return Void();
    }

    Return<void> onRefresh(Hwc2::Display display) override
    {
        mCallback->onRefreshReceived(mSequenceId, display);
        return Void();
    }

    Return<void> onVsync(Hwc2::Display display, int64_t timestamp) override
    {
        mCallback->onVsyncReceived(mSequenceId, display, timestamp);
        return Void();
    }

private:
    ComposerCallback* mCallback;
    int32_t mSequenceId;
};

When the HAL calls back, onHotplugReceived, onRefreshReceived or onVsyncReceived is invoked on the registered callback. Remember the SF UML diagram at the top? SF itself implements ComposerCallback; in other words, SF is connected to the HAL layer through the ComposerCallbackBridge.

This is how notifications travel from the HAL up to the software layer.

Initialization of EventControlThread

EventControlThread::EventControlThread(EventControlThread::SetVSyncEnabledFunction function)
      : mSetVSyncEnabled(function) {
    pthread_setname_np(mThread.native_handle(), "EventControlThread");

    pid_t tid = pthread_gettid_np(mThread.native_handle());
    setpriority(PRIO_PROCESS, tid, ANDROID_PRIORITY_URGENT_DISPLAY);
    set_sched_policy(tid, SP_FOREGROUND);
}

void EventControlThread::threadMain() NO_THREAD_SAFETY_ANALYSIS {
    auto keepRunning = true;
    auto currentVsyncEnabled = false;

    while (keepRunning) {
        mSetVSyncEnabled(currentVsyncEnabled);

        std::unique_lock<std::mutex> lock(mMutex);
        mCondition.wait(lock, [this, currentVsyncEnabled, keepRunning]() NO_THREAD_SAFETY_ANALYSIS {
            return currentVsyncEnabled != mVsyncEnabled || keepRunning != mKeepRunning;
        });
        currentVsyncEnabled = mVsyncEnabled;
        keepRunning = mKeepRunning;
    }
}

You can see that EventControlThread controls whether the HWC generates vsync, because mSetVSyncEnabled actually corresponds to

setVsyncEnabled(HWC_DISPLAY_PRIMARY, enabled);

But it’s much simpler because it’s only responsible for HWC.

initializeDisplays posts an asynchronous message to initialize the display data

void SurfaceFlinger::initializeDisplays() {
    class MessageScreenInitialized : public MessageBase {
        SurfaceFlinger* flinger;
    public:
        explicit MessageScreenInitialized(SurfaceFlinger* flinger) : flinger(flinger) { }
        virtual bool handler() {
            flinger->onInitializeDisplays();
            return true;
        }
    };
    sp<MessageBase> msg = new MessageScreenInitialized(this);
    postMessageAsync(msg);  // we may be called from main thread, use async message
}

This pattern is very common in the native layer; it shows the Handler callback mechanism quite directly.

status_t SurfaceFlinger::postMessageAsync(const sp<MessageBase>& msg,
        nsecs_t reltime, uint32_t /* flags */) {
    return mEventQueue->postMessage(msg, reltime);
}

As you can see, this asynchronous message will not be handled until the Looper starts polling. Remember that at this point mEventQueue's Looper has only been created -- it is not yet looping and listening. The callback therefore does not reach the Handler immediately, and init moves on to the next step.

SurfaceFlinger's run

At this point SF’s init method is done. After the startDisplayService registers the Hal layer DisplayService method, then the surfaceFlinger run method is continued in main_surfaceFlinger.

void SurfaceFlinger::run() {
    do {
        waitForEvent();
    } while (true);
}

void SurfaceFlinger::waitForEvent() {
    mEventQueue->waitMessage();
}
void MessageQueue::waitMessage() {
    do {
        IPCThreadState::self()->flushCommands();
        int32_t ret = mLooper->pollOnce(-1);
        switch (ret) {
            case Looper::POLL_WAKE:
            case Looper::POLL_CALLBACK:
                continue;
            case Looper::POLL_ERROR:
                ALOGE("Looper::POLL_ERROR");
                continue;
            case Looper::POLL_TIMEOUT:
                // timeout (should not happen)
                continue;
            default:
                // should not happen
                ALOGE("Looper::pollOnce() returned unknown status %d", ret);
                continue;
        }
    } while (true);
}

Once inside this loop, pending Binder commands are flushed and the thread parks in epoll (via pollOnce), waiting for someone to wake it up.
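For readers unfamiliar with that pattern, here is a tiny standalone sketch (not Android's Looper code) of the same park-and-wake idea built from epoll and an eventfd:

#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int wakeFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    int epollFd = epoll_create1(EPOLL_CLOEXEC);

    epoll_event request = {};
    request.events = EPOLLIN;
    request.data.fd = wakeFd;
    epoll_ctl(epollFd, EPOLL_CTL_ADD, wakeFd, &request);

    // Something (another thread, a BitTube receive fd, ...) wakes the loop by
    // writing to a registered fd; here we do it ourselves so the example ends.
    uint64_t one = 1;
    write(wakeFd, &one, sizeof(one));

    epoll_event events[8];
    int n = epoll_wait(epollFd, events, 8, -1 /* block, like pollOnce(-1) */);
    printf("woke up with %d event(s)\n", n);

    close(epollFd);
    close(wakeFd);
    return 0;
}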

SF Initializes the display module

Have you ever wondered why SF is structured this way -- why it posts messages into mEventQueue during init but only starts looping at the very end, in run? The point is simple: init first registers the ComposerCallback with the hardware layer, and only once run starts looping does the asynchronous message posted earlier actually get executed, after the hardware has had a chance to call back.

In the next article, I’ll take you through how Qualcomm MSM8996 implements callbacks at the Hal layer. It’s common sense to assume that when a phone is powered up, the screen driver is one of the first commonly used services (other than the Linux kernel) to boot. Therefore, when SF is started, there must be a notification prepared at the bottom that will be called back once SF registers a listener. Remember that the callback in ComposerCallbackBridge should actually be the SF equivalent of the screen hot-plug callback: onHotplugReceived.

void SurfaceFlinger::onHotplugReceived(int32_t sequenceId, hwc2_display_t display,
                                       HWC2::Connection connection) {
    ALOGV("onHotplugReceived(%d, %" PRIu64 ", %s)", sequenceId, display,
          connection == HWC2::Connection::Connected ? "connected" : "disconnected");

    // Ignore events that do not have the right sequenceId.
    if (sequenceId != getBE().mComposerSequenceId) {
        return;
    }

    ConditionalLock lock(mStateLock, std::this_thread::get_id() != mMainThreadId);

    mPendingHotplugEvents.emplace_back(HotplugEvent{display, connection});

    if (std::this_thread::get_id() == mMainThreadId) {
        // Process all pending hot plug events immediately if we are on the main thread.
        processDisplayHotplugEventsLocked();
    }

    setTransactionFlags(eDisplayTransactionNeeded);
}

From this code you can see that HWC2::Connection represents the connection state and hwc2_display_t is a handle from the HAL layer that represents a display. A HotplugEvent is built from them and pushed into the mPendingHotplugEvents collection, waiting for SF to digest it.

The call may arrive on one of two threads: a hwbinder thread (hwbinder is still the Binder driver, just the instance used to talk to the hardware side) or SF's own main thread. Assuming we are on the main thread, processDisplayHotplugEventsLocked is executed immediately.

void SurfaceFlinger::processDisplayHotplugEventsLocked() {
    for (const auto& event : mPendingHotplugEvents) {
        auto displayType = determineDisplayType(event.display, event.connection);
        if (displayType == DisplayDevice::DISPLAY_ID_INVALID) {
            continue;
        }

        if (getBE().mHwc->isUsingVrComposer() && displayType == DisplayDevice::DISPLAY_EXTERNAL) {
            continue;
        }

        getBE().mHwc->onHotplug(event.display, displayType, event.connection);

        if (event.connection == HWC2::Connection::Connected) {
            if (!mBuiltinDisplays[displayType].get()) {
                mBuiltinDisplays[displayType] = new BBinder();
                // All non-virtual displays are currently considered secure.
                DisplayDeviceState info(displayType, true);
                info.displayName = displayType == DisplayDevice::DISPLAY_PRIMARY ?
                        "Built-in Screen" : "External Screen";
                mCurrentState.displays.add(mBuiltinDisplays[displayType], info);
                mInterceptor->saveDisplayCreation(info);
            }
        } else {
            ssize_t idx = mCurrentState.displays.indexOfKey(mBuiltinDisplays[displayType]);
            if (idx >= 0) {
                const DisplayDeviceState& info(mCurrentState.displays.valueAt(idx));
                mInterceptor->saveDisplayDeletion(info.displayId);
                mCurrentState.displays.removeItemsAt(idx);
            }
            mBuiltinDisplays[displayType].clear();
        }

        processDisplayChangesLocked();
    }

    mPendingHotplugEvents.clear();
}
  • 1. mHwc->onHotplug is called first to proactively notify HWComposer.
  • 2. If the event says a display was connected and mBuiltinDisplays has no entry for this display type yet, a new BBinder token is created at that index (waiting to be used by DisplayManagerService), a DisplayDeviceState is saved, and the display is added to mCurrentState.displays.
  • 3. If the display was turned off or pulled out, its entry is removed from mCurrentState and the mBuiltinDisplays token is cleared.
  • 4. processDisplayChangesLocked then handles the display state change.

A quick note: mBuiltinDisplays is essentially an array of size two, indexed by two display types:

  • DISPLAY_PRIMARY, value 0, the built-in primary display
  • DISPLAY_EXTERNAL, value 1, an external display

HWComposer actively processes onHotplug

void HWComposer::onHotplug(hwc2_display_t displayId, int32_t displayType,
                           HWC2::Connection connection) {
    if (displayType >= HWC_NUM_PHYSICAL_DISPLAY_TYPES) {
        return;
    }
    mHwcDevice->onHotplug(displayId, connection);
    if (connection == HWC2::Connection::Connected) {
        mDisplayData[displayType].hwcDisplay = mHwcDevice->getDisplayById(displayId);
        mHwcDisplaySlots[displayId] = displayType;
    }
}

You can see that the connected display is recorded in two structures, mDisplayData and mHwcDisplaySlots, but the real work is delegated to the lower-level mHwcDevice -- which, as we saw above, is an HWC2::Device:

void Device::onHotplug(hwc2_display_t displayId, Connection connection) {
    if (connection == Connection::Connected) {
        auto oldDisplay = getDisplayById(displayId);
        if (oldDisplay != nullptr && oldDisplay->isConnected()) {
            ALOGI("Hotplug connecting an already connected display."
                    " Clearing old display state.");
        }
        mDisplays.erase(displayId);

        DisplayType displayType;
        auto intError = mComposer->getDisplayType(displayId,
                reinterpret_cast<Hwc2::IComposerClient::DisplayType *>(
                        &displayType));
        auto error = static_cast<Error>(intError);
        if (error != Error::None) {
...
            return;
        }

        auto newDisplay = std::make_unique<Display>(
                *mComposer.get(), mCapabilities, displayId, displayType);
        newDisplay->setConnected(true);
        mDisplays.emplace(displayId, std::move(newDisplay));
    } else if (connection == Connection::Disconnected) {
        auto display = getDisplayById(displayId);
        if (display) {
            display->setConnected(false);
        } else {
            ...
        }
    }
}

The logic is simple: any stale entry for the display is cleared, a new HWC2::Display object is created for it, its connected state is set, and it is saved into mDisplays. This completes the mapping of the hardware display into a software-layer object.

From the limited information so far we know that an hwc2_display_t corresponds to an HWC2::Display object. But is that really the object at the HAL layer? We'll find out later.

processDisplayChangesLocked: allocating a Surface and buffer queue for the display

void SurfaceFlinger::processDisplayChangesLocked() {
    const KeyedVector<wp<IBinder>, DisplayDeviceState>& curr(mCurrentState.displays);
    const KeyedVector<wp<IBinder>, DisplayDeviceState>& draw(mDrawingState.displays);
    if (!curr.isIdenticalTo(draw)) {
        mVisibleRegionsDirty = true;
        const size_t cc = curr.size();
        size_t dc = draw.size();

        // find the displays that were removed
        // (ie: in drawing state but not in current state)
        for (size_t i = 0; i < dc;) {
            const ssize_t j = curr.indexOfKey(draw.keyAt(i));
            if (j < 0) {
                const sp<const DisplayDevice> defaultDisplay(getDefaultDisplayDeviceLocked());
                if (defaultDisplay != nullptr) defaultDisplay->makeCurrent();
                sp<DisplayDevice> hw(getDisplayDeviceLocked(draw.keyAt(i)));
                if (hw != nullptr) hw->disconnect(getHwComposer());
                if (draw[i].type < DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES)
                    mEventThread->onHotplugReceived(draw[i].type, false);
                mDisplays.removeItem(draw.keyAt(i));
            } else {
                ...
                ++i;
            }
        }

        // find displays that were added
        // (ie: in current state but not in drawing state)
        for (size_t i = 0; i < cc; i++) {
            if (draw.indexOfKey(curr.keyAt(i)) < 0) {
                const DisplayDeviceState& state(curr[i]);

                sp<DisplaySurface> dispSurface;
                sp<IGraphicBufferProducer> producer;
                sp<IGraphicBufferProducer> bqProducer;
                sp<IGraphicBufferConsumer> bqConsumer;
                mCreateBufferQueue(&bqProducer, &bqConsumer, false);

                int32_t hwcId = -1;
                if (state.isVirtualDisplay()) {
                    ...
                } else {
                    hwcId = state.type;
                    dispSurface = new FramebufferSurface(*getBE().mHwc, hwcId, bqConsumer);
                    producer = bqProducer;
                }

                const wp<IBinder>& display(curr.keyAt(i));
                if (dispSurface != nullptr) {
                    mDisplays.add(display,
                                  setupNewDisplayDeviceInternal(display, hwcId, state, dispSurface,
                                                                producer));
                    if (!state.isVirtualDisplay()) {
                        mEventThread->onHotplugReceived(state.type, true);
                    }
                }
            }
        }
    }

    mDrawingState.displays = mCurrentState.displays;
}

This is a very long method, so parts of it have been trimmed; we'll focus on the path taken during the first initialization.

SF keeps two copies of display state: mCurrentState, which holds all the display data as it currently stands (in other words, SF's current state), and mDrawingState, which is the state SF actually draws from.

Each time SF draws, it only draws the displays and layers in mDrawingState, never mCurrentState. But every run of this method ends by copying mCurrentState.displays into mDrawingState.displays.

On the first callback mDrawingState contains no display data at all, so the removal loop -- which would normally call getDefaultDisplayDeviceLocked, make the default (primary) display current for drawing and disconnect the removed ones -- has nothing to do. The primary display has not even been created yet, so that logic is effectively a no-op here.

The core logic is the second loop over mCurrentState. We can ignore the virtual-display branch and only care about connecting the primary display.

A very important call here is mCreateBufferQueue, which creates the graphic buffer queue discussed in the previous chapter, with its producer and consumer ends; the consumer end is handed to a FramebufferSurface.

In other words, when we want to understand how buffers are consumed, we should start from FramebufferSurface.

Finally comes the core step: the display's BBinder token is mapped, via setupNewDisplayDeviceInternal, to a newly built DisplayDevice, and the pair is cached in mDisplays.

If the display is not virtual, EventThread::onHotplugReceived is called at this point -- not to be confused with the hotplug callback on ComposerCallback. This queues a hotplug event and wakes the thread blocked in waitForEventLocked:

void EventThread::onHotplugReceived(int type, bool connected) {
    std::lock_guard<std::mutex> lock(mMutex);
    if (type < DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES) {
        DisplayEventReceiver::Event event;
        event.header.type = DisplayEventReceiver::DISPLAY_EVENT_HOTPLUG;
        event.header.id = type;
        event.header.timestamp = systemTime();
        event.hotplug.connected = connected;
        mPendingEvents.add(event);
        mCondition.notify_all();
    }
}

You can see that mCondition is notified to wake the blocked thread. More on this later.

setupNewDisplayDeviceInternal

sp<DisplayDevice> SurfaceFlinger::setupNewDisplayDeviceInternal(
        const wp<IBinder>& display, int hwcId, const DisplayDeviceState& state,
        const sp<DisplaySurface>& dispSurface, const sp<IGraphicBufferProducer>& producer) {
    bool hasWideColorGamut = false;
    std::unordered_map<ColorMode, std::vector<RenderIntent>> hwcColorModes;

    if (hasWideColorDisplay) {
        std::vector<ColorMode> modes = getHwComposer().getColorModes(hwcId);
        for (ColorMode colorMode : modes) {
            switch (colorMode) {
                case ColorMode::DISPLAY_P3:
                case ColorMode::ADOBE_RGB:
                case ColorMode::DCI_P3:
                    hasWideColorGamut = true;
                    break;
                default:
                    break;
            }

            std::vector<RenderIntent> renderIntents = getHwComposer().getRenderIntents(hwcId,
                                                                                       colorMode);
            hwcColorModes.emplace(colorMode, renderIntents);
        }
    }

    HdrCapabilities hdrCapabilities;
    getHwComposer().getHdrCapabilities(hwcId, &hdrCapabilities);

    auto nativeWindowSurface = mCreateNativeWindowSurface(producer);
    auto nativeWindow = nativeWindowSurface->getNativeWindow();

    /*
     * Create our display's surface
     */
    std::unique_ptr<RE::Surface> renderSurface = getRenderEngine().createSurface();
    renderSurface->setCritical(state.type == DisplayDevice::DISPLAY_PRIMARY);
    renderSurface->setAsync(state.type >= DisplayDevice::DISPLAY_VIRTUAL);
    renderSurface->setNativeWindow(nativeWindow.get());
    const int displayWidth = renderSurface->queryWidth();
    const int displayHeight = renderSurface->queryHeight();
    if (state.type >= DisplayDevice::DISPLAY_VIRTUAL) {
        nativeWindow->setSwapInterval(nativeWindow.get(), 0);
    }

    // virtual displays are always considered enabled
    auto initialPowerMode = (state.type >= DisplayDevice::DISPLAY_VIRTUAL) ? HWC_POWER_MODE_NORMAL
                                                                           : HWC_POWER_MODE_OFF;

    sp<DisplayDevice> hw =
            new DisplayDevice(this, state.type, hwcId, state.isSecure, display, nativeWindow,
                              dispSurface, std::move(renderSurface), displayWidth, displayHeight,
                              hasWideColorGamut, hdrCapabilities,
                              getHwComposer().getSupportedPerFrameMetadata(hwcId),
                              hwcColorModes, initialPowerMode);

    if (maxFrameBufferAcquiredBuffers >= 3) {
        nativeWindowSurface->preallocateBuffers();
    }

    ColorMode defaultColorMode = ColorMode::NATIVE;
    Dataspace defaultDataSpace = Dataspace::UNKNOWN;
    if (hasWideColorGamut) {
        defaultColorMode = ColorMode::SRGB;
        defaultDataSpace = Dataspace::SRGB;
    }
    setActiveColorModeInternal(hw, defaultColorMode, defaultDataSpace,
                               RenderIntent::COLORIMETRIC);
    if (state.type < DisplayDevice::DISPLAY_VIRTUAL) {
        hw->setActiveConfig(getHwComposer().getActiveConfigIndex(state.type));
    }
    hw->setLayerStack(state.layerStack);
    hw->setProjection(state.orientation, state.viewport, state.frame);
    hw->setDisplayName(state.displayName);

    return hw;
}

Two core structures are built here. A native window is created from the buffer producer, RenderEngine creates an RE::Surface for it, width and height are queried from that surface, and all of it -- together with the layer stack and other state -- is collected into a DisplayDevice.

The data structures are a bit tangled, so how do we find the Surface a display is rendered into? In short: mDisplays maps the display token to a DisplayDevice, which holds the corresponding native window, which in turn leads to the RE::Surface.
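As a rough sketch of that lookup chain (the types and the findRenderSurface helper below are stand-ins for illustration, not the real SF classes):

#include <map>
#include <memory>

struct RESurface {};            // stands in for RE::Surface
struct NativeWindow {};         // stands in for ANativeWindow
struct DisplayDevice {
    std::shared_ptr<NativeWindow> nativeWindow;
    std::shared_ptr<RESurface> renderSurface;
};

using DisplayHandle = int;      // stands in for the wp<IBinder> display token

std::shared_ptr<RESurface> findRenderSurface(
        const std::map<DisplayHandle, DisplayDevice>& displays, DisplayHandle token) {
    auto it = displays.find(token);
    if (it == displays.end()) return nullptr;   // unknown display
    return it->second.renderSurface;            // the surface SF renders this display into
}

int main() {
    std::map<DisplayHandle, DisplayDevice> displays;
    displays[0] = DisplayDevice{std::make_shared<NativeWindow>(), std::make_shared<RESurface>()};
    auto surface = findRenderSurface(displays, 0);
    return surface ? 0 : 1;
}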

OK, with the display data structures ready, let's see what the asynchronous message waiting in the queue does.

onInitializeDisplays

void SurfaceFlinger::onInitializeDisplays() {
    // reset screen orientation and use primary layer stack
    Vector<ComposerState> state;
    Vector<DisplayState> displays;
    DisplayState d;
    d.what = DisplayState::eDisplayProjectionChanged |
             DisplayState::eLayerStackChanged;
    d.token = mBuiltinDisplays[DisplayDevice::DISPLAY_PRIMARY];
    d.layerStack = 0;
    d.orientation = DisplayState::OrientationDefault;
    d.frame.makeInvalid();
    d.viewport.makeInvalid();
    d.width = 0;
    d.height = 0;
    displays.add(d);
    setTransactionState(state, displays, 0);
    setPowerModeInternal(getDisplayDevice(d.token), HWC_POWER_MODE_NORMAL,
                         /*stateLockHeld*/ false);

    const auto& activeConfig = getBE().mHwc->getActiveConfig(HWC_DISPLAY_PRIMARY);
    const nsecs_t period = activeConfig->getVsyncPeriod();
    mAnimFrameTracker.setDisplayRefreshPeriod(period);

    setCompositorTimingSnapped(0, period, 0);
}

setTransactionState adjusts the display state stored in mCurrentState. For reasons of space, we'll come back to it later.

conclusion

This article dissected the initialization SurfaceFlinger performs above the HAL layer, giving us a general impression of SF as a whole and of what each key role is responsible for:

  • 1. mPrimaryDispSync, EventThread and MessageQueue together make up the logic for generating and distributing vsync signals.
  • 2. SF owns an SFBE (SurfaceFlingerBE) as the object for hardware-facing operations. It contains two extremely important roles, HWComposer and RenderEngine. HWComposer wraps HWC2::Device, the medium used to talk to the HAL, and registers SF's callback with the HAL layer so SF can wait for hardware callbacks. RenderEngine prepares the surfaces that layers are rendered into and sets up the OpenGL ES environment.

The original article summarizes SF with a diagram.

By keeping this picture in mind, we can hold on to a general impression of SF; later articles will explore specific parts of it in detail. Author: yjy239. Link: www.jianshu.com/p/9dac91bbb… Copyright belongs to the author. For commercial reprints please contact the author for authorization; for non-commercial reprints please cite the source.