What is frame rate
In graphics, FPS (frames per second) is defined as the number of frames an image is transmitted or displayed per second, which in everyday terms is the number of frames of animation or video shown each second.
What is the effect of frame rate
To explain, let's borrow Huawei's interpretation of the "Software Green Alliance Application Experience Standard 3.0": the refresh frame rate of an application interface, especially while scrolling, feels laggy if it is low, so keeping a relatively high frame rate gives a smoother experience. And the lower the refresh frequency, the more noticeable the flicker and jitter, and the faster the eyes tire.
So what affects the frame rate
Today's flagship Android phones have refresh rates of 120Hz, but do we really need a rate that high? In fact, the frame rate we need varies from scene to scene. A movie is perfectly watchable at 24Hz, a game needs at least 30Hz, and a truly smooth feel needs more than 60Hz. That is why Honor of Kings needs to run at around 60Hz to avoid feeling stuck. So what is it that affects frame rate?
- The graphics processor: the higher the target FPS, the more graphics processing power is required
- The resolution: the lower the resolution, the easier it is to reach a high frame rate
A simple formula captures the relationship: required graphics processing power = resolution × frame rate. For example, at a resolution of 1024×768 and a frame rate of 24 frames per second, the graphics processor needs to push about 18.87 million pixels per second; reaching 50Hz requires about 39 million pixels per second. The small sketch below runs the numbers.
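To double-check that arithmetic, here is a minimal sketch in plain Java (the PixelThroughput class and its method names are just for illustration, not an official formula from any SDK):

public class PixelThroughput {

    // Required pixel throughput = width * height * frames per second
    static long requiredPixelsPerSecond(int width, int height, int fps) {
        return (long) width * height * fps;
    }

    public static void main(String[] args) {
        // 1024x768 at 24 fps -> 18,874,368, about 18.87 million pixels per second
        System.out.println(requiredPixelsPerSecond(1024, 768, 24));
        // 1024x768 at 50 fps -> 39,321,600, about 39 million pixels per second
        System.out.println(requiredPixelsPerSecond(1024, 768, 50));
    }
}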
How much frame rate counts as normal
The "Software Green Alliance Application Experience Standard 3.0 – Performance Standard" lists the following requirements:
- For normal applications, the frame rate should be ≥55fps
- Games, maps, and videos should have a frame rate of 25fps or more
Don't games only feel smooth once they hit 60Hz? The Green Alliance's requirement for games looks pretty low to me, haha.
How do you monitor frame rate on an Android phone?
Given that keeping a high frame rate is so important to the user experience, how do we monitor the phone's frame rate in real time? One of the easiest ways is through the phone's developer options: enable the GPU rendering profiling option ("Profile GPU rendering"), show it as on-screen bars, or dump it via adb shell; the details are in the official guide on the GPU rendering mode analysis tool. But that doesn't seem to be our focus this time. We would rather look at it from the code side. So how do we do that? The answer is Choreographer, a class Google introduced in Android API 16. It calls back the doFrame method of the FrameCallback interface before each frame is drawn and provides the time (in nanoseconds) at which the current frame started rendering, as follows:
/**
* Implement this interface to receive a callback when a new display frame is
* being rendered. The callback is invoked on the {@link Looper} thread to
* which the {@link Choreographer} is attached.
*/
public interface FrameCallback {
/**
* Called when a new display frame is being rendered.
* <p>
* This method provides the time in nanoseconds when the frame started being rendered.
* The frame time provides a stable time base for synchronizing animations
* and drawing. It should be used instead of {@link SystemClock#uptimeMillis()}
* or {@link System#nanoTime()} for animations and drawing in the UI. Using the frame
* time helps to reduce inter-frame jitter because the frame time is fixed at the time
* the frame was scheduled to start, regardless of when the animations or drawing
* callback actually runs. All callbacks that run as part of rendering a frame will
* observe the same frame time so using the frame time also helps to synchronize effects
* that are performed by different callbacks.
* </p><p>
* Please note that the framework already takes care to process animations and
* drawing using the frame time as a stable time base. Most applications should
* not need to use the frame time information directly.
* </p>
*
* @param frameTimeNanos The time in nanoseconds when the frame started being rendered,
* in the {@link System#nanoTime()} timebase. Divide this value by {@code 1000000}
* to convert it to the {@link SystemClock#uptimeMillis()} time base.
*/
public void doFrame(long frameTimeNanos);
}
As you can see from the comments, this is exactly what was described above. Let's dig a little deeper: what drives the doFrame callback? And then think about how to work out the frame rate from it.
The mechanism behind Choreographer
To answer that, we need to understand how Android rendering works. Rendering on Android is the result of long iteration and continuous optimization by Google; the whole pipeline is complex and relies on a lot of framework support. At the bottom are libraries we already know, such as Skia and OpenGL: Flutter, for example, draws with Skia, which here is 2D and CPU-driven, while OpenGL can draw 3D and uses the GPU.

The Surface is where graphics are actually rendered: every element is drawn and rendered onto a Surface, and each Window is associated with one. WindowManager manages these Windows and passes their buffers and window metadata to SurfaceFlinger, which takes the buffers, composites them (via the Hardware Composer), and outputs the result to the display. A Surface may be drawn in several layers, and those layers live in the buffers mentioned above. Before Android 4.1, double buffering was used; from Android 4.1 on, a triple-buffer mechanism is used.

But that's not all. Google announced Project Butter at the 2012 I/O conference and officially enabled it in Android 4.1, introducing the VSYNC signal. For each frame the CPU computes first, then the GPU renders, and finally the Display shows the result; VSYNC acts like a queue in a producer-consumer model, pacing the work so frames are produced and consumed one after another. We know the final buffer reaches the screen through SurfaceFlinger, and VSYNC's role in between is to keep the rendering pipeline orderly and reduce latency. We also know the VSYNC interval is 16ms; if a frame misses it, the screen keeps showing the previous frame, which is what feels like a dropped frame. Why 16ms? It comes from a simple calculation: 1000ms / 60fps ≈ 16ms per frame. Choreographer's doFrame method is normally called back every 16ms, and it is driven by exactly this VSYNC signal.
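Not every device refreshes at 60Hz, so rather than hard-coding 16ms, the frame budget can be derived from the display's reported refresh rate. A minimal sketch using standard Android APIs (getDefaultDisplay() is deprecated on newer API levels but still fine for illustration; the helper class name is my own):

import android.content.Context;
import android.view.Display;
import android.view.WindowManager;

public class FrameIntervalHelper {

    // Expected interval between frames in milliseconds, based on the display's
    // reported refresh rate: 60Hz -> ~16.7ms, 90Hz -> ~11.1ms, 120Hz -> ~8.3ms.
    public static float frameIntervalMs(Context context) {
        WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        Display display = wm.getDefaultDisplay();
        float refreshRateHz = display.getRefreshRate();
        return 1000f / refreshRateHz;
    }
}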
How do I calculate frame rates with Choreographer
Since the doFrame method is normally called back about every 16ms, we can use that fact to do the calculation. Look at the code to get the whole idea:
// Record the start time of the current sampling window and the frames counted in it
private long mLastFrameTime;
private int mFrameCount;

Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
    @Override
    public void doFrame(long frameTimeNanos) {
        // Initialize the base time on the first callback of a sampling window
        if (mLastFrameTime == 0) {
            mLastFrameTime = frameTimeNanos;
        }
        // Start time of this frame minus the base time, divided by 1,000,000 to get milliseconds
        float diff = (frameTimeNanos - mLastFrameTime) / 1000000.0f;
        // Output the frame rate roughly every 500 milliseconds
        if (diff > 500) {
            double fps = ((double) (mFrameCount * 1000L)) / diff;
            mFrameCount = 0;
            mLastFrameTime = 0;
            Log.d("doFrame", "doFrame: " + fps);
        } else {
            ++mFrameCount;
        }
        // Register to listen for the next vsync signal
        Choreographer.getInstance().postFrameCallback(this);
    }
});
Why 500 milliseconds? You could just as well use one second; that's up to you. Either way, if doFrame gets called back around 60 times per second, the interface is doing fine. Now that we know how to measure frame rate in code, let's start analyzing Matrix's frame rate detection code and see what it does.
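As a usage sketch (my own wiring, not Matrix's code; the class name FpsMonitorActivity and the mMonitoring flag are made up for illustration), this is one way to start and stop such a monitor with the Activity lifecycle so the callback stops re-posting itself once the page is no longer visible:

import android.app.Activity;
import android.util.Log;
import android.view.Choreographer;

public class FpsMonitorActivity extends Activity {

    private long mLastFrameTime;
    private int mFrameCount;
    private boolean mMonitoring;

    private final Choreographer.FrameCallback mFrameCallback = new Choreographer.FrameCallback() {
        @Override
        public void doFrame(long frameTimeNanos) {
            if (!mMonitoring) {
                return; // stop re-posting once the page is paused
            }
            if (mLastFrameTime == 0) {
                mLastFrameTime = frameTimeNanos;
            }
            float diffMs = (frameTimeNanos - mLastFrameTime) / 1000000.0f;
            if (diffMs > 500) {
                double fps = mFrameCount * 1000.0 / diffMs;
                Log.d("FpsMonitor", "fps: " + fps);
                mFrameCount = 0;
                mLastFrameTime = 0;
            } else {
                mFrameCount++;
            }
            Choreographer.getInstance().postFrameCallback(this);
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        mMonitoring = true;
        // Reset the window so time spent in the background doesn't skew the first reading
        mLastFrameTime = 0;
        mFrameCount = 0;
        Choreographer.getInstance().postFrameCallback(mFrameCallback);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mMonitoring = false;
        Choreographer.getInstance().removeFrameCallback(mFrameCallback);
    }
}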
Matrix frame rate detection code analysis
The frame rate detection code must be in the trace-canary module. Looking at the overall directory first, we find the ITracer abstraction and its implementations such as FrameTracer, EvilMethodTracer and StartupTracer. Inside the FrameTracer class we find the following code: FrameCollectItem, a private class of FrameTracer.
private class FrameCollectItem {
long sumFrameCost;
int sumFrame = 0;
void report() {
// Calculate the frame rate: 1000.f * sumFrame / sumFrameCost has the same shape as our earlier
// formula double fps = ((double) (mFrameCount * 1000L)) / diff
// sumFrameCost is a dynamically accumulated time span, roughly 500 milliseconds to 1 second.
float fps = Math.min(60.f, 1000.f * sumFrame / sumFrameCost);
MatrixLog.i(TAG, "[report] FPS:%s %s", fps, toString());
try {
// Generate a Json report.
TracePlugin plugin = Matrix.with().getPluginByClass(TracePlugin.class);
if (null == plugin) {
return;
}
JSONObject dropLevelObject = new JSONObject();
dropLevelObject.put(DropStatus.DROPPED_FROZEN.name(), dropLevel[DropStatus.DROPPED_FROZEN.index]);
dropLevelObject.put(DropStatus.DROPPED_HIGH.name(), dropLevel[DropStatus.DROPPED_HIGH.index]);
dropLevelObject.put(DropStatus.DROPPED_MIDDLE.name(), dropLevel[DropStatus.DROPPED_MIDDLE.index]);
dropLevelObject.put(DropStatus.DROPPED_NORMAL.name(), dropLevel[DropStatus.DROPPED_NORMAL.index]);
dropLevelObject.put(DropStatus.DROPPED_BEST.name(), dropLevel[DropStatus.DROPPED_BEST.index]);
JSONObject dropSumObject = new JSONObject();
dropSumObject.put(DropStatus.DROPPED_FROZEN.name(), dropSum[DropStatus.DROPPED_FROZEN.index]);
dropSumObject.put(DropStatus.DROPPED_HIGH.name(), dropSum[DropStatus.DROPPED_HIGH.index]);
dropSumObject.put(DropStatus.DROPPED_MIDDLE.name(), dropSum[DropStatus.DROPPED_MIDDLE.index]);
dropSumObject.put(DropStatus.DROPPED_NORMAL.name(), dropSum[DropStatus.DROPPED_NORMAL.index]);
dropSumObject.put(DropStatus.DROPPED_BEST.name(), dropSum[DropStatus.DROPPED_BEST.index]);
JSONObject resultObject = new JSONObject();
resultObject = DeviceUtil.getDeviceInfo(resultObject, plugin.getApplication());
resultObject.put(SharePluginInfo.ISSUE_SCENE, visibleScene);
resultObject.put(SharePluginInfo.ISSUE_DROP_LEVEL, dropLevelObject);
resultObject.put(SharePluginInfo.ISSUE_DROP_SUM, dropSumObject);
resultObject.put(SharePluginInfo.ISSUE_FPS, fps);
Issue issue = new Issue();
issue.setTag(SharePluginInfo.TAG_PLUGIN_FPS);
issue.setContent(resultObject);
plugin.onDetectIssue(issue);
} catch (JSONException e) {
MatrixLog.e(TAG, "json error", e);
} finally {
sumFrame = 0;
sumDroppedFrames = 0;
sumFrameCost = 0;
}
}
}
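To make the arithmetic in report() concrete, here is a simplified, illustrative collector of my own: it accumulates per-frame cost, counts frames, and buckets dropped frames much like the dropLevel/dropSum arrays above. The thresholds and the FRAME_INTERVAL_MS constant are hypothetical, not Matrix's actual constants.

// Illustrative only -- not Matrix source. Thresholds and names are hypothetical.
class SimpleFrameCollector {

    static final long FRAME_INTERVAL_MS = 16;                 // assumed 60Hz frame budget
    static final int[] DROP_THRESHOLDS = {0, 3, 9, 24, 42};   // BEST..FROZEN boundaries, made up for illustration

    long sumFrameCost;                                         // total time spent rendering, in ms
    int sumFrame;                                              // number of frames collected
    int[] dropBuckets = new int[DROP_THRESHOLDS.length];       // how many frames fell into each drop level

    // frameCostMs: how long one rendered frame actually took
    void collect(long frameCostMs) {
        sumFrameCost += frameCostMs;
        sumFrame++;
        // A frame that takes n vsync intervals has dropped roughly n - 1 frames
        int droppedFrames = Math.max(0, Math.round((float) frameCostMs / FRAME_INTERVAL_MS) - 1);
        for (int i = DROP_THRESHOLDS.length - 1; i >= 0; i--) {
            if (droppedFrames >= DROP_THRESHOLDS[i]) {
                dropBuckets[i]++;                              // count it in the worst level it reaches
                break;
            }
        }
    }

    // Same shape as report() above: frames * 1000 / elapsed ms, capped at 60
    float fps() {
        return Math.min(60.f, 1000.f * sumFrame / sumFrameCost);
    }
}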
Following the FrameTracer code to find where sumFrame is accumulated (the original post shows screenshots of this chain rather than pasting every step), and wanting to quickly verify how Matrix Trace measures frame rate, we skip the details and jump to the core logic: the data flows through a collect(...) call into FPSCollector; going one level up, the doReplay method calls doReplayInner, and continuing up, it is IDoFrameListener that calls doReplay.
FPSCollector extends IDoFrameListener. Comparing with our earlier analysis, there is still no sign of Choreographer.FrameCallback anywhere, even though the calculation is the same. Don't believe it? We have to keep looking. At this point we do see a doFrame function that looks like FrameCallback's, but it is not. Tracing further, the UIThreadMonitor class turns up, and following it into its init function we find the call below; time to look at the code:
LooperMonitor.register(new LooperMonitor.LooperDispatchListener() {
    @Override
    public boolean isValid() {
        return isAlive;
    }

    @Override
    public void dispatchStart() {
        super.dispatchStart();
        UIThreadMonitor.this.dispatchBegin();
    }

    @Override
    public void dispatchEnd() {
        super.dispatchEnd();
        UIThreadMonitor.this.dispatchEnd();
    }
});
So what is LooperMonitor, and why can it sense the frame rate? What exactly is it?
class LooperMonitor implements MessageQueue.IdleHandler
MessageQueue.IdleHandler lets you register an operation to run when a thread is idle: as long as the thread has nothing else to do, the registered callback is executed. That is different from our earlier scheme, which never considered whether the thread was idle and just kept calculating the frame rate.
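For reference, registering an IdleHandler looks roughly like this (a generic sketch, unrelated to Matrix's internals; it must be called from a thread that has a Looper, and the IdleWork class name is my own):

import android.os.Looper;
import android.os.MessageQueue;

public class IdleWork {

    public static void scheduleWhenIdle(final Runnable task) {
        // queueIdle() is invoked only when this thread's MessageQueue has no pending messages.
        Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
            @Override
            public boolean queueIdle() {
                task.run();      // e.g. aggregate and report collected frame data
                return false;    // false = remove this handler after it has run once
            }
        });
    }
}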
At this point I roughly get it: Matrix does not use FrameCallback at all; it calculates things in a different way. Rather than spelling it out yet, let's keep tracing LooperDispatchListener. What we find is a LooperPrinter, and the dispatch goes through it. A printer?

class LooperPrinter implements Printer

// The android.util.Printer interface it implements:
public interface Printer {
    /**
     * Write a line of text to the output. There is no need to terminate
     * the given string with a newline.
     */
    void println(String x);
}
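To see why a Printer can observe every message on a thread: when a logging Printer is installed, Looper.loop() prints a line starting with ">>>>> Dispatching to" before handling each message and "<<<<< Finished to" after it. A minimal sketch of exploiting that on the main thread (class name my own); note this naive version replaces whatever Printer was already installed, whereas the Matrix code we look at next wraps the original one:

import android.os.Looper;
import android.os.SystemClock;
import android.util.Log;
import android.util.Printer;

public class MainLooperWatcher {

    private static long sDispatchStartMs;

    public static void install() {
        Looper.getMainLooper().setMessageLogging(new Printer() {
            @Override
            public void println(String x) {
                if (x.startsWith(">>>>>")) {
                    // A message is about to be dispatched on the main thread
                    sDispatchStartMs = SystemClock.uptimeMillis();
                } else if (x.startsWith("<<<<<")) {
                    // The message has finished; anything well over one frame budget is suspicious
                    long costMs = SystemClock.uptimeMillis() - sDispatchStartMs;
                    if (costMs > 16) {
                        Log.w("MainLooperWatcher", "main thread message took " + costMs + "ms");
                    }
                }
            }
        });
    }
}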
Now back to Matrix: where is the LooperPrinter object created? Searching for references turns up the following method. Look at the code in detail:
private synchronized void resetPrinter() {
    Printer originPrinter = null;
    try {
        if (!isReflectLoggingError) {
            originPrinter = ReflectUtils.get(looper.getClass(), "mLogging", looper);
            if (originPrinter == printer && null != printer) {
                return;
            }
        }
    } catch (Exception e) {
        isReflectLoggingError = true;
        Log.e(TAG, "[resetPrinter] %s", e);
    }
    if (null != printer) {
        MatrixLog.w(TAG, "maybe thread:%s printer[%s] was replace other[%s]!",
                looper.getThread().getName(), printer, originPrinter);
    }
    // setMessageLogging sets the Printer that Looper.loop() uses to log message dispatch;
    // here it is replaced with a LooperPrinter that wraps the original one.
    looper.setMessageLogging(printer = new LooperPrinter(originPrinter));
    if (null != originPrinter) {
        MatrixLog.i(TAG, "reset printer, originPrinter[%s] in %s", originPrinter, looper.getThread().getName());
    }
}
Following the looper field, we find it is the main thread's Looper. As we all know, the main thread is the one responsible for UI refresh, so Matrix takes advantage of the logging mechanism the main thread's Looper provides, and leaves the heavier processing for when the thread is idle, which is how it monitors the frame rate and more. A very nice design, well worth learning from. One detail I noticed is that Matrix still relies on Choreographer to calculate frame timing, using reflection to pull out its fields, for example:
// Frame interval time
frameIntervalNanos = ReflectUtils.reflectObject(choreographer, "mFrameIntervalNanos", Constants.DEFAULT_FRAME_DURATION);
// Receiver of the vsync signal
vsyncReceiver = ReflectUtils.reflectObject(choreographer, "mDisplayEventReceiver", null);

The frameTimeNanos passed to the doFrame callback above is actually taken from this vsyncReceiver.
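For completeness, here is a hedged sketch of what reading that timestamp with plain reflection could look like, assuming the mTimestampNanos field shown later in the FrameDisplayEventReceiver source. These are internal fields: their names vary across Android versions and access to them is restricted by the non-SDK interface rules on newer releases, so treat this strictly as an illustration.

import java.lang.reflect.Field;

public class VsyncTimestampReader {

    // Illustrative only: reads FrameDisplayEventReceiver.mTimestampNanos via reflection.
    public static long readTimestampNanos(Object vsyncReceiver) {
        try {
            Field f = vsyncReceiver.getClass().getDeclaredField("mTimestampNanos");
            f.setAccessible(true);
            return f.getLong(vsyncReceiver);
        } catch (Exception e) {
            return -1L; // field missing or inaccessible on this device / version
        }
    }
}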
Judging from the source, it seems calculating the frame rate really is impossible without Choreographer. So the question comes up again:
Why can Looper's logging mechanism be used to calculate frame rate
If you are wondering the same thing I was, let me show you Choreographer's source code. Here we go:
private static final ThreadLocal<Choreographer> sThreadInstance =
new ThreadLocal<Choreographer>() {
@Override
protected Choreographer initialValue() {
Looper looper = Looper.myLooper();
if (looper == null) {
throw new IllegalStateException("The current thread must have a looper!");
}
Choreographer choreographer = new Choreographer(looper, VSYNC_SOURCE_APP);
if (looper == Looper.getMainLooper()) {
mMainInstance = choreographer;
}
return choreographer;
}
};
From this code we can conclude that Choreographer is thread-private, because a variable created with ThreadLocal can only be accessed by its own thread. In other words, every thread gets its own Choreographer, and the main thread's Choreographer is mMainInstance. Let's look at one more piece of code: the constructor.
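A quick illustration of that thread affinity first (a sketch of my own using a HandlerThread, which owns its own Looper):

import android.os.Handler;
import android.os.HandlerThread;
import android.view.Choreographer;

public class PerThreadChoreographerDemo {

    public static void demo() {
        // On a plain Thread without Looper.prepare(), Choreographer.getInstance()
        // throws IllegalStateException ("The current thread must have a looper!").
        HandlerThread thread = new HandlerThread("choreo-demo");
        thread.start();
        new Handler(thread.getLooper()).post(new Runnable() {
            @Override
            public void run() {
                // This instance belongs to the "choreo-demo" thread, not the main thread.
                Choreographer background = Choreographer.getInstance();
                background.postFrameCallback(new Choreographer.FrameCallback() {
                    @Override
                    public void doFrame(long frameTimeNanos) {
                        // Delivered on the background thread's Looper when vsync arrives.
                    }
                });
            }
        });
    }
}

Now, the constructor: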
private Choreographer(Looper looper, int vsyncSource) {
mLooper = looper;
mHandler = new FrameHandler(looper);
mDisplayEventReceiver = USE_VSYNC
? new FrameDisplayEventReceiver(looper, vsyncSource)
: null;
mLastFrameTimeNanos = Long.MIN_VALUE;
mFrameIntervalNanos = (long) (1000000000 / getRefreshRate());
mCallbackQueues = new CallbackQueue[CALLBACK_LAST + 1];
for (int i = 0; i <= CALLBACK_LAST; i++) {
mCallbackQueues[i] = new CallbackQueue();
}
// b/68769804: For low FPS experiments.
setFPSDivisor(SystemProperties.getInt(ThreadedRenderer.DEBUG_FPS_DIVISOR, 1));
}
This is Choreographer's constructor, and what we actually see is that a Choreographer is created with a Looper, a one-to-one relationship: within a thread there is one Looper and one Choreographer. There are also two important members, FrameHandler and FrameDisplayEventReceiver. We don't yet know what they are for, so let's keep reading the code.
private final class FrameHandler extends Handler {
public FrameHandler(Looper looper) {
super(looper);
}
@Override
public void handleMessage(Message msg) {
switch (msg.what) {
case MSG_DO_FRAME:
doFrame(System.nanoTime(), 0);
break;
case MSG_DO_SCHEDULE_VSYNC:
doScheduleVsync();
break;
case MSG_DO_SCHEDULE_CALLBACK:
doScheduleCallback(msg.arg1);
break;
}
}
}

void doFrame(long frameTimeNanos, int frame) {
final long startNanos;
synchronized (mLock) {
if (!mFrameScheduled) {
return; // no work to do
}
if (DEBUG_JANK && mDebugPrintNextFrameTimeDelta) {
mDebugPrintNextFrameTimeDelta = false;
Log.d(TAG, "Frame time delta: "
+ ((frameTimeNanos - mLastFrameTimeNanos) * 0.000001f) + " ms");
}
long intendedFrameTimeNanos = frameTimeNanos;
startNanos = System.nanoTime();
final long jitterNanos = startNanos - frameTimeNanos;
if (jitterNanos >= mFrameIntervalNanos) {
final long skippedFrames = jitterNanos / mFrameIntervalNanos;
if (skippedFrames >= SKIPPED_FRAME_WARNING_LIMIT) {
Log.i(TAG, "Skipped " + skippedFrames + " frames! "
+ "The application may be doing too much work on its main thread.");
}
final long lastFrameOffset = jitterNanos % mFrameIntervalNanos;
if (DEBUG_JANK) {
Log.d(TAG, "Missed vsync by " + (jitterNanos * 0.000001 f) + " ms "
+ "which is more than the frame interval of "
+ (mFrameIntervalNanos * 0.000001 f) + " ms! "
+ "Skipping " + skippedFrames + " frames and setting frame "
+ "time to " + (lastFrameOffset * 0.000001 f) + " ms in the past.");
}
frameTimeNanos = startNanos - lastFrameOffset;
}
if (frameTimeNanos < mLastFrameTimeNanos) {
if (DEBUG_JANK) {
Log.d(TAG, "Frame time appears to be going backwards. May be due to a "
+ "previously skipped frame. Waiting for next vsync.");
}
scheduleVsyncLocked();
return;
}
if (mFPSDivisor > 1) {
long timeSinceVsync = frameTimeNanos - mLastFrameTimeNanos;
if (timeSinceVsync < (mFrameIntervalNanos * mFPSDivisor) && timeSinceVsync > 0) {
scheduleVsyncLocked();
return;
}
}
mFrameInfo.setVsync(intendedFrameTimeNanos, frameTimeNanos);
mFrameScheduled = false;
mLastFrameTimeNanos = frameTimeNanos;
}
try {
Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Choreographer#doFrame");
AnimationUtils.lockAnimationClock(frameTimeNanos / TimeUtils.NANOS_PER_MS);
mFrameInfo.markInputHandlingStart();
doCallbacks(Choreographer.CALLBACK_INPUT, frameTimeNanos);
mFrameInfo.markAnimationsStart();
doCallbacks(Choreographer.CALLBACK_ANIMATION, frameTimeNanos);
mFrameInfo.markPerformTraversalsStart();
doCallbacks(Choreographer.CALLBACK_TRAVERSAL, frameTimeNanos);
doCallbacks(Choreographer.CALLBACK_COMMIT, frameTimeNanos);
} finally {
AnimationUtils.unlockAnimationClock();
Trace.traceEnd(Trace.TRACE_TAG_VIEW);
}
if (DEBUG_FRAMES) {
final long endNanos = System.nanoTime();
Log.d(TAG, "Frame " + frame + ": Finished, took "
+ (endNanos - startNanos) * 0.000001f + " ms, latency "
+ (startNanos - frameTimeNanos) * 0.000001f + " ms.");
}
}
FrameHandler receives the message and, when processing it, calls Choreographer's doFrame function. Note that this doFrame is not the doFrame of the FrameCallback passed to postFrameCallback; after searching through the code, I found that the FrameCallback's doFrame is actually triggered from here, inside the doCallbacks calls. I won't drag you through those details; instead, let's see who triggers FrameHandler's messages in the first place, which brings us to FrameDisplayEventReceiver.
private final class FrameDisplayEventReceiver extends DisplayEventReceiver
implements Runnable {
private boolean mHavePendingVsync;
private long mTimestampNanos;
private int mFrame;
public FrameDisplayEventReceiver(Looper looper, int vsyncSource) {
super(looper, vsyncSource);
}
@Override
public void onVsync(long timestampNanos, int builtInDisplayId, int frame) {
// Ignore vsync from secondary display.
// This can be problematic because the call to scheduleVsync() is a one-shot.
// We need to ensure that we will still receive the vsync from the primary
// display which is the one we really care about. Ideally we should schedule
// vsync for a particular display.
// At this time Surface Flinger won't send us vsyncs for secondary displays
// but that could change in the future so let's log a message to help us remember
// that we need to fix this.
if (builtInDisplayId != SurfaceControl.BUILT_IN_DISPLAY_ID_MAIN) {
Log.d(TAG, "Received vsync from secondary display, but we don't support "
+ "this case yet. Choreographer needs a way to explicitly request "
+ "vsync for a specific display to ensure it doesn't lose track "
+ "of its scheduled vsync.");
scheduleVsync();
return;
}
// Post the vsync event to the Handler.
// The idea is to prevent incoming vsync events from completely starving
// the message queue. If there are no messages in the queue with timestamps
// earlier than the frame time, then the vsync event will be processed immediately.
// Otherwise, messages that predate the vsync event will be handled first.
long now = System.nanoTime();
if (timestampNanos > now) {
Log.w(TAG, "Frame time is " + ((timestampNanos - now) * 0.000001 f)
+ " ms in the future! Check that graphics HAL is generating vsync "
+ "timestamps using the correct timebase.");
timestampNanos = now;
}
if (mHavePendingVsync) {
Log.w(TAG, "Already have a pending vsync event. There should only be "
+ "one at a time.");
} else {
mHavePendingVsync = true;
}
mTimestampNanos = timestampNanos;
mFrame = frame;
Message msg = Message.obtain(mHandler, this);
msg.setAsynchronous(true);
mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
}
@Override
public void run() {
mHavePendingVsync = false;
doFrame(mTimestampNanos, mFrame);
}
}
This callback is triggered by DisplayEventReceiver in the android.view package, which is initialized down in the C++ layer and listens for VSYNC signals; the signal itself is delivered by SurfaceFlinger. So now we know: FrameDisplayEventReceiver receives the onVsync signal and then posts a message through mHandler, which is the FrameHandler above.

Note that it does not call mHandler.obtainMessage(MSG_DO_FRAME) to trigger doFrame. Look instead at Message.obtain(mHandler, this): the this here is the FrameDisplayEventReceiver itself, and FrameDisplayEventReceiver implements Runnable. So when FrameHandler receives the message, it executes FrameDisplayEventReceiver's run function, and run calls doFrame, which then fans out through doCallbacks.

OK, to summarize why this works: the consumption of Choreographer's onVsync message is handled by the Looper of the thread Choreographer runs on, so by monitoring the main thread's Looper messages we can also monitor the frame rate. That's the trick.
Summary
- A Printer is set on the main thread's Looper to intercept message dispatch
- The dispatch start and end times are then used to calculate the frame rate
- MessageQueue.IdleHandler is used to avoid adding work while the thread is busy, deferring processing until it is idle
That’s about it. If you find anything new, or if I’m wrong, feel free to comment.