PrecomputedText, as the name suggests, is used to precompute text. It exists because text measurement is a time-consuming operation: layout must be calculated from the text size, font, style, line breaks, and so on, and the cost grows with the number of characters. If that work lands on multi-line text in a list, scrolling drops frames and hurts the user experience. For products like Weibo, list items are very complex.

TextLayoutCache was introduced in Android 4.0 to address this problem: each measured piece of text is added to a cache, so the next time the same text is needed it can be fetched from the cache instead of being measured again. However, the cache is only 0.5 MB, and before the cache is warmed up, the first scroll still pays the measurement cost on the UI thread. To address these issues, PrecomputedText was added in Android 9.0. The measurement time is said to be reduced by 95%; for the comparison, see the link at the end of this article.

How to use it

  • compileSdkVersion 28 or higher, with the Support Library AppCompat 28.0.0+ or AndroidX AppCompat 1.0.0+ (a minimal Gradle sketch follows this list)

  • Use AppCompatTextView instead of TextView

  • Replace the setText method with setTextFuture
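
For reference, a minimal sketch of the Gradle setup (Kotlin DSL). The coordinate and version here are just the minimums named above, not taken from the article; adjust them to whatever your project actually uses:

// build.gradle.kts (module) - minimal sketch; compileSdkVersion must be 28 or higher.
dependencies {
    // AndroidX AppCompat 1.0.0 or newer (or the Support Library appcompat-v7 28.0.0+)
    implementation("androidx.appcompat:appcompat:1.0.0")
}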

The code is as follows:

Future<PrecomputedTextCompat> future = PrecomputedTextCompat.getTextFuture(
        "text", textView.getTextMetricsParamsCompat(), null);
textView.setTextFuture(future);

Of course, if you’re using Kotlin, it’s nicer to wrap this in an extension function.

fun AppCompatTextView.setTextFuture(charSequence: CharSequence) {
    this.setTextFuture(
        PrecomputedTextCompat.getTextFuture(
            charSequence,
            TextViewCompat.getTextMetricsParams(this),
            null
        )
    )
}

// One-line call
textView.setTextFuture("text")
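
For context, here is a sketch of how this is typically wired into a RecyclerView adapter. The ArticleAdapter and Holder names are made up for illustration, and layout inflation is simplified to creating the text view directly:

import android.view.ViewGroup
import androidx.appcompat.widget.AppCompatTextView
import androidx.core.text.PrecomputedTextCompat
import androidx.core.widget.TextViewCompat
import androidx.recyclerview.widget.RecyclerView

// A hypothetical adapter showing where setTextFuture is typically called from.
class ArticleAdapter(private val items: List<CharSequence>) :
    RecyclerView.Adapter<ArticleAdapter.Holder>() {

    class Holder(val textView: AppCompatTextView) : RecyclerView.ViewHolder(textView)

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): Holder {
        val tv = AppCompatTextView(parent.context)
        tv.layoutParams = RecyclerView.LayoutParams(
            ViewGroup.LayoutParams.MATCH_PARENT,
            ViewGroup.LayoutParams.WRAP_CONTENT
        )
        return Holder(tv)
    }

    override fun onBindViewHolder(holder: Holder, position: Int) {
        // Start measuring on a background thread; the blocking future.get()
        // happens later in the view's onMeasure().
        holder.textView.setTextFuture(
            PrecomputedTextCompat.getTextFuture(
                items[position],
                TextViewCompat.getTextMetricsParams(holder.textView),
                null
            )
        )
    }

    override fun getItemCount() = items.size
}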

Implementation principle

The implementation principle of PrecomputedText is simple: the time-consuming measurement is performed asynchronously.

    @UiThread
    public static Future<PrecomputedTextCompat> getTextFuture(@NonNull CharSequence charSequence,
            @NonNull PrecomputedTextCompat.Params params, @Nullable Executor executor) {
        PrecomputedTextCompat.PrecomputedTextFutureTask task =
                new PrecomputedTextCompat.PrecomputedTextFutureTask(params, charSequence);
        if (executor == null) {
            synchronized (sLock) {
                if (sExecutor == null) {
                    sExecutor = Executors.newFixedThreadPool(1);
                }
                executor = sExecutor;
            }
        }
        executor.execute(task);
        return task;
    }
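
As the code shows, when executor is null, all measurement tasks queue up on one lazily created shared background thread. If that ever becomes a bottleneck, the third parameter lets you pass your own executor. A minimal sketch, where the function name and the pool size of two are my own choices rather than anything from the library:

import androidx.appcompat.widget.AppCompatTextView
import androidx.core.text.PrecomputedTextCompat
import androidx.core.widget.TextViewCompat
import java.util.concurrent.Executors

// A shared pool dedicated to text measurement; the pool size is arbitrary.
val textMeasureExecutor = Executors.newFixedThreadPool(2)

fun precomputeText(textView: AppCompatTextView, text: CharSequence) {
    textView.setTextFuture(
        PrecomputedTextCompat.getTextFuture(
            text,
            TextViewCompat.getTextMetricsParams(textView),
            textMeasureExecutor  // used instead of the lazily created single thread
        )
    )
}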

Then consumeTextFutureAndSetBlocking() calls future.get(), blocking the thread to obtain the measurement result, which is finally handed to the TextView via setText().

    public void setTextFuture(@NonNull Future<PrecomputedTextCompat> future) {
        this.mPrecomputedTextFuture = future;
        this.requestLayout();
    }

    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        this.consumeTextFutureAndSetBlocking();
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    }

    private void consumeTextFutureAndSetBlocking() {
        if (this.mPrecomputedTextFuture != null) {
            try {
                Future<PrecomputedTextCompat> future = this.mPrecomputedTextFuture;
                this.mPrecomputedTextFuture = null;
                TextViewCompat.setPrecomputedText(this, (PrecomputedTextCompat) future.get());
            } catch (ExecutionException | InterruptedException e) {
                // ignored
            }
        }
    }
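
If you would rather not block inside onMeasure(), the same compat classes can also be driven manually: compute the PrecomputedTextCompat on a background thread yourself and apply it when ready. A sketch using a coroutine (the scope, dispatcher choice, and function name are my own; the Kotlin sample linked at the end of the article takes a similar coroutine-based approach):

import androidx.appcompat.widget.AppCompatTextView
import androidx.core.text.PrecomputedTextCompat
import androidx.core.widget.TextViewCompat
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// A sketch of driving the same classes manually with a coroutine instead of a Future.
fun AppCompatTextView.setTextAsync(text: CharSequence, scope: CoroutineScope) {
    // Capture the params on the UI thread, before any further property changes.
    val params = TextViewCompat.getTextMetricsParams(this)
    scope.launch(Dispatchers.Default) {
        // Heavy measurement runs off the UI thread.
        val precomputed = PrecomputedTextCompat.create(text, params)
        withContext(Dispatchers.Main) {
            // The params must still match the view when this is applied,
            // so avoid changing text-related properties in between.
            TextViewCompat.setPrecomputedText(this@setTextAsync, precomputed)
        }
    }
}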

A new problem

While researching PrecomputedText, I found a related demo on GitHub and, after applying it, observed a negative optimization.

In this example, each item contains three AppCompatTextViews with a very small text size, so roughly ten paragraphs of text fit on one screen. After applying PrecomputedText, the onBindViewHolder method did take much less time to execute, but new problems appeared.

The first thing to understand is that the faster you scroll a list, the more measuring and drawing happens per unit of time. I ran tests at three scroll speeds, both before and after the change: slow (one fling per second, light force), medium (two flings per second, medium force), and fast (three flings per second, strong force), and reached the conclusions below. (All flings were done by hand, which was genuinely tiring…)

I won’t show all of the Systrace results, but here are the before and after captures for the medium scroll speed.

The light green bars representing Animation and Input are visibly higher.

Problem / speed (before -> after)    Slow       Medium     Fast
Scheduling delay                     4 -> 46    5 -> 39    8 -> 17
Long View#draw()                     18 -> 12   37 -> 30   50 -> 48
Expensive measure/layout pass        1 -> 0     0          0

A scheduling delay means a thread is not scheduled by the CPU for a long stretch of time, so the thread sits idle for that period.

As you can see, with PrecomputedText in use, scheduling delay problems tend to increase, and can even become severe. For example, in the following trace fragment, dequeueBuffer actually ran on the CPU for only 0.119 ms, yet its total wall time was 10.035 ms.

In fact, if you look closely, dequeueBuffer has already run at the beginning, but is then waiting for the CPU to schedule the next step, which in turn is waiting on SurfaceFlinger. As shown below:

The notification back to the CPU is therefore delayed, which shows up as a scheduling delay. Exactly why this problem is triggered with such high probability is unclear. My guess is that the text itself is complex, with different font sizes, colors, and styles within a single paragraph, and more than a dozen such paragraphs on screen at the same time. In that case, more than a dozen asynchronous text-measurement tasks are dispatched in a short period of time, which is bound to have a performance impact.

After I increased the text size, the problem got much better. So PrecomputedText needs to be applied with the specific scenario in mind; otherwise the optimization can backfire.

Conclusion

  • Do not abuse PrecomputedText. It brings little improvement for one or two lines of text and may cause unnecessary scheduling delays; it is recommended for text longer than about 200 characters (see the sketch after this list).

  • Do not modify TextView properties after calling TextViewCompat.getTextMetricsParams(). For example, set the text size before that call.

  • On Android 9.0 and above, PrecomputedTextCompat uses PrecomputedText; from 5.0 up to 9.0 it falls back to StaticLayout; below 5.0 it does nothing.

  • PrecomputedText has no effect if you have disabled prefetch on the RecyclerView. If you use a custom LayoutManager, make sure it implements collectAdjacentPrefetchPositions() so that RecyclerView knows which items to prefetch. For the same reason, ListView cannot enjoy the performance optimization that PrecomputedText provides.
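
Combining the first two points, here is a sketch of gating PrecomputedText on text length. The setTextSmart name and the exact threshold are illustrative, not from any library:

import androidx.appcompat.widget.AppCompatTextView
import androidx.core.text.PrecomputedTextCompat
import androidx.core.widget.TextViewCompat

// Rough threshold taken from the recommendation above; tune it for your own content.
private const val PRECOMPUTE_MIN_LENGTH = 200

fun AppCompatTextView.setTextSmart(text: CharSequence) {
    if (text.length < PRECOMPUTE_MIN_LENGTH) {
        // Short text: plain setText avoids the extra scheduling overhead.
        setText(text)
    } else {
        // Long text: precompute off the UI thread.
        // Remember to configure size/typeface before this point, not after.
        setTextFuture(
            PrecomputedTextCompat.getTextFuture(
                text,
                TextViewCompat.getTextMetricsParams(this),
                null
            )
        )
    }
}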

For a full Kotlin sample, see PrecomputedTextCompatExample, which also includes an optimized approach using coroutines. I have also written a corresponding Java example.

The effect is as follows:

(Screenshots: normal / PrecomputedText future / PrecomputedText coroutine)

Finally, if this article was helpful to you, I’d appreciate your support!

References

  • Use Android Text Like a Pro

  • What is new in Android P? - PrecomputedText

  • Prefetch Text Layout in RecyclerView