If you were to write your own image-loading framework, what would you consider?

First, let's comb through the essential requirements of an image-loading framework:

  • Asynchronous loading: thread pools
  • Switching threads: Handler, without question
  • Caching: LruCache, DiskLruCache
  • Preventing OOM: soft references, LruCache, image compression, where Bitmap pixels are stored
  • Memory leaks: hold ImageView references carefully, manage lifecycles
  • List scrolling issues: images loading out of order, too many queued tasks

Of course, there are also non-essential requirements, such as loading animations.

2.1 Asynchronous Loading:

How many thread pools do we need?

There are three levels of cache: memory, disk, and network.

Because network requests block, reads from the memory and disk caches can share one thread pool, while the network needs its own; alternatively, the network layer can simply use OkHttp's built-in thread pool.

In short, disk reads and network loads should live in separate pools, so two thread pools are appropriate.

Glide must use multiple thread pools too; let's look at the source code to check.

public final class GlideBuilder {
  ...
  private GlideExecutor sourceExecutor;     // thread pool for loading from the source (network)
  private GlideExecutor diskCacheExecutor;  // thread pool for loading from the disk cache
  ...
  private GlideExecutor animationExecutor;  // thread pool for animations

So Glide uses three thread pools, or two if you don't count animations.
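As a rough sketch of this two-pool setup (the class name and pool sizes are illustrative assumptions, not Glide's actual configuration), a hand-rolled loader might declare its executors like this:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative executors for a hand-rolled image loader: one pool for cache reads,
// one for network loads, mirroring the split described above.
public final class ImageExecutors {
    // Memory/disk cache reads are comparatively fast, so a small fixed pool is enough.
    public static final ExecutorService CACHE = Executors.newFixedThreadPool(2);
    // Network loads block on I/O, so they get their own, slightly larger pool.
    public static final ExecutorService NETWORK = Executors.newFixedThreadPool(4);

    private ImageExecutors() {}
}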

2.2 Switching Threads:

The image is loaded asynchronously, so you need to update the ImageView in the main thread.

Whether it's RxJava, EventBus, or Glide, a Handler is what ultimately switches work from a child thread to the Android main thread.

Look at the related Glide source code:

class EngineJob<R> implements DecodeJob.Callback<R>, Poolable {
  private static final EngineResourceFactory DEFAULT_FACTORY = new EngineResourceFactory();
  private static final Handler MAIN_THREAD_HANDLER =
      new Handler(Looper.getMainLooper(), new MainThreadCallback());

RxJava is written entirely in Java, so how does it switch from a child thread to the Android main thread? Plenty of developers with three to six years of experience still cannot answer this very basic question, and as long as you cannot answer it, you will not be able to answer the principle questions that follow either.
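For reference, the switch always comes down to a Handler bound to the main Looper; this is, in essence, what RxAndroid's AndroidSchedulers.mainThread() wraps. A minimal sketch (the helper class is illustrative):

import android.os.Handler;
import android.os.Looper;

// Minimal sketch: background work delivers its result to the Android main thread
// by posting to a Handler that is bound to the main Looper.
public final class MainThread {
    private static final Handler HANDLER = new Handler(Looper.getMainLooper());

    // Runnables posted here run on the UI thread, so it is safe to touch views.
    public static void post(Runnable action) {
        HANDLER.post(action);
    }

    private MainThread() {}
}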

Many Android developers who have worked for years have never heard of Hongyang, Guo Lin, or Yu Gang Shuo, and do not know what Juejin (掘金, literally "gold digging") is; some probably wonder whether there are also sites called "silver digging" or "iron digging" (I have no idea whether there are).

What I want to say is that in this line of work you really need a passion for technology and the habit of continuous learning. Don't be afraid that others are better than you; be afraid that the people who are better than you also work harder than you, without you even knowing it.

2.3 Caching

When we talk about image caching, we usually mean three levels: the memory cache, the disk cache, and the network.

2.3.1 Memory Cache

The memory cache usually uses LruCache.

Glide's default memory cache is also an LRU cache. It is not the LruCache from the Android SDK, but it is likewise built on LinkedHashMap, so the principle is the same.

// -> GlideBuilder#build
if (memoryCache == null) {
  memoryCache = new LruResourceCache(memorySizeCalculator.getMemoryCacheSize());
}

Since we are talking about LruCache, we should understand its characteristics and its source code.

Why LruCache?

LruCache uses a least-recently-used eviction policy: you give it a maximum cache size, and once the cached data reaches that size, the oldest entries are removed, which helps avoid OOM.
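A typical usage sketch (sizing the cache at one eighth of the max heap is a common convention, not a requirement):

import android.graphics.Bitmap;
import android.util.LruCache;

// Minimal bitmap memory cache: cap the cache by memory size rather than entry count.
public class BitmapMemoryCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapMemoryCache() {
        // Use about 1/8 of the app's max heap, measured in KB.
        int maxKb = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);
        cache = new LruCache<String, Bitmap>(maxKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Report each entry's cost in KB so the cap above is respected.
                return value.getByteCount() / 1024;
            }
        };
    }

    public void put(String url, Bitmap bitmap) {
        cache.put(url, bitmap);
    }

    public Bitmap get(String url) {
        return cache.get(url);
    }
}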

LruCache source code analysis
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;
    ...
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        // Create a LinkedHashMap with accessOrder set to true
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
    ...

The LruCache constructor creates a LinkedHashMap with accessOrder set to true, meaning entries are ordered by access. All data storage is delegated to this LinkedHashMap.

Take a look at how LinkedHashMap works

LinkedHashMap extends HashMap and does not override the put method, so it keeps HashMap's array-plus-linked-list bucket structure.

LinkedHashMap overrides the createEntry method.

Take a look at the createEntry method of the HashMap

void createEntry(int hash, K key, V value, int bucketIndex) {
    HashMapEntry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new HashMapEntry<>(hash, key, value, e);
    size++;
}

The HashMap array stores HashMapEntry objects.

Take a look at the createEntry method of LinkedHashMap

void createEntry(int hash, K key, V value, int bucketIndex) {
    HashMapEntry<K,V> old = table[bucketIndex];
    LinkedHashMapEntry<K,V> e = new LinkedHashMapEntry<>(hash, key, value, old);
    table[bucketIndex] = e;   // put into the bucket array
    e.addBefore(header);      // maintain the doubly linked list
    size++;
}

The LinkedHashMap array stores LinkedHashMapEntry objects.

LinkedHashMapEntry

private static class LinkedHashMapEntry<K,V> extends HashMapEntry<K,V> {
    // These fields comprise the doubly linked list used for iteration.
    LinkedHashMapEntry<K,V> before, after;

    private void remove() {
        before.after = after;
        after.before = before;
    }

    private void addBefore(LinkedHashMapEntry<K,V> existingEntry) {
        after = existingEntry;
        before = existingEntry.before;
        before.after = this;
        after.before = this;
    }

LinkedHashMapEntry extends HashMapEntry and adds before and after fields, so the entries form a doubly linked list; it also adds the addBefore and remove methods to insert and delete list nodes.

LinkedHashMapEntry#addBefore inserts an entry in front of the header:

private void addBefore(LinkedHashMapEntry<K,V> existingEntry) {
        after  = existingEntry;
        before = existingEntry.before;
        before.after = this;
        after.before = this;
}

existingEntry is always the list header. Adding a node in front of the header only requires adjusting pointers: new entries are placed at header.before, so header.before is always the most recently accessed entry, and header.after is always the oldest.

Now look at LinkedHashMapEntry#remove:

private void remove() {
        before.after = after;
        after.before = before;
    }

To remove a linked list node, change the pointer.

Now look at LruCache's put method:

public final V put(K key, V value) {
    V previous;
    synchronized (this) {
        putCount++;
        // add the new entry's size
        size += safeSizeOf(key, value);
        // 1. delegate storage to LinkedHashMap#put
        previous = map.put(key, value);
        if (previous != null) {
            // a previous value was replaced, so subtract its size
            size -= safeSizeOf(key, previous);
        }
    }
    ...
    trimToSize(maxSize);
    return previous;
}

The LinkedHashMap structure can therefore be pictured as a hash-table bucket array whose entries are additionally threaded into a doubly linked list ordered by access.

LruCache's put method (and get, when it has to create a missing entry) ends by calling trimToSize, which checks whether the cache has grown beyond its maximum size and, if so, removes the oldest entries.

LruCache#trimToSize removes the oldest entries:

public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size <= maxSize) {
                break;
            }
            // over the limit: evict the oldest entry
            Map.Entry<K, V> toEvict = map.eldest();
            if (toEvict == null) {
                break;
            }
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            // safeSizeOf delegates to sizeOf, which returns 1 unless overridden
            size -= safeSizeOf(key, value);
            evictionCount++;
        }
        entryRemoved(true, key, value, null);
    }
}

If you are not familiar with LinkedHashMap, it is worth finding an illustrated walkthrough of how it works.

LruCache summary:

  • LinkedHashMap extends HashMap and adds a doubly linked list on top of HashMap's structure. Every time an entry is accessed, its list pointers are updated: the node is first unlinked from the list and then re-linked in front of the header. This guarantees that the entries in front of the header are the most recently accessed ones (unlinking a node does not delete the data; only the list pointers move, and the entry stays in the map).
  • LruCache stores its data in a LinkedHashMap. With the doubly linked list keeping entries ordered from oldest to newest, LruCache adds a maximum size: on every put, once the cached data exceeds that maximum, the oldest entries are removed so memory never grows past the limit.

2.3.2 Disk Cache DiskLruCache

Dependency:

implementation 'com.jakewharton:disklrucache:2.0.2'

DiskLruCache works much like LruCache: when the total size of the files written to disk exceeds the configured limit, the oldest files are deleted. Take a quick look at the remove operation:

private final LinkedHashMap<String, Entry> lruEntries =
        new LinkedHashMap<String, Entry>(0, 0.75f, true);
...
public synchronized boolean remove(String key) throws IOException {
    checkNotClosed();
    validateKey(key);
    Entry entry = lruEntries.get(key);
    if (entry == null || entry.currentEditor != null) {
        return false;
    }
    for (int i = 0; i < valueCount; i++) {
        File file = entry.getCleanFile(i);
        // delete the cache file with file.delete()
        if (file.exists() && !file.delete()) {
            throw new IOException("failed to delete " + file);
        }
        size -= entry.lengths[i];
        entry.lengths[i] = 0;
    }
    ...
    return true;
}

You can see that DiskLruCache relies on the same LinkedHashMap behavior; the difference is that its entries describe files on disk, and an Editor is used to manipulate them.

private final class Entry {
    private final String key;
    private final long[] lengths;
    private boolean readable;
    private Editor currentEditor;
    private long sequenceNumber;
    ...
}
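For completeness, here is a rough usage sketch of the library (written from memory of its README, so treat the details as assumptions and check the documentation): each entry stores one value, written through an Editor and read back through a Snapshot.

import com.jakewharton.disklrucache.DiskLruCache;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class DiskCacheDemo {
    public static void main(String[] args) throws IOException {
        // directory, app version, values per entry, max size in bytes
        DiskLruCache cache = DiskLruCache.open(new File("image-cache"), 1, 1, 10L * 1024 * 1024);

        // Write: obtain an Editor, stream the bytes, then commit.
        DiskLruCache.Editor editor = cache.edit("some-image-key");
        if (editor != null) {
            OutputStream out = editor.newOutputStream(0);
            out.write(new byte[] {1, 2, 3});   // image bytes in a real app
            out.close();
            editor.commit();
        }

        // Read: a Snapshot exposes an InputStream per value index.
        DiskLruCache.Snapshot snapshot = cache.get("some-image-key");
        if (snapshot != null) {
            InputStream in = snapshot.getInputStream(0);
            in.close();
        }
        cache.close();
    }
}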

2.4 Preventing OOM

The LruCache size limit above already goes a long way toward preventing OOM, but when an app needs many images it may have to configure a fairly large cache, which raises the chance of OOM again. So we should look at other ways to prevent it.

Method 1: Soft reference

Review the four Java reference types:

  • Strong reference: an ordinary variable is a strong reference, such as private Context context;
  • Soft reference: SoftReference. Before an OOM would occur, the garbage collector reclaims objects that are only softly reachable.
  • Weak reference: WeakReference. Whenever GC runs, the garbage collector reclaims objects that are only weakly reachable.
  • Phantom reference: the object can be reclaimed at any time; it is rarely used in everyday development.

Strong references:

When a strongly referenced object is collected depends on the garbage collector, which commonly uses reachability analysis. When an Activity is destroyed, it is cut off from the GC roots. Roughly speaking, the Activity object is created in ActivityThread, and ActivityThread, which drives the Activity's lifecycle callbacks, necessarily holds a reference to it, so ActivityThread can be treated as the GC root here. Once the Activity runs onDestroy, ActivityThread drops its reference, the Activity can no longer reach a GC root, and the garbage collector marks it as collectible.

SoftReference is designed for exactly this memory-pressure scenario: a large object such as a Bitmap can be held through a SoftReference so it can be reclaimed before an OOM occurs.

private static LruCache<String, SoftReference<Bitmap>> mLruCache =
        new LruCache<String, SoftReference<Bitmap>>(10 * 1024) {
    @Override
    protected int sizeOf(String key, SoftReference<Bitmap> value) {
        // sizeOf returns 1 by default; here we return the bitmap size in KB,
        // or 0 if the bitmap has already been reclaimed
        if (value.get() == null) {
            return 0;
        }
        return value.get().getByteCount() / 1024;
    }
};

The LruCache stores SoftReference objects, so when memory runs low the Bitmaps they point to can be reclaimed; in other words, Bitmaps held through SoftReference will not cause an OOM.

When a Bitmap is reclaimed, the size accounted for by the LruCache should be recalculated. You can write a method that removes entries whose Bitmap has become null, so the LruCache cleans up and recomputes its used memory.
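A rough sketch of that cleanup idea (an illustrative helper, not part of LruCache; how accurately the size is re-accounted also depends on how sizeOf is written):

import android.graphics.Bitmap;
import android.util.LruCache;

import java.lang.ref.SoftReference;
import java.util.Map;

public final class SoftCachePruner {
    // Walk a snapshot of the cache and drop entries whose Bitmap was already reclaimed.
    public static void prune(LruCache<String, SoftReference<Bitmap>> cache) {
        for (Map.Entry<String, SoftReference<Bitmap>> entry : cache.snapshot().entrySet()) {
            if (entry.getValue().get() == null) {
                cache.remove(entry.getKey());   // free the slot held by the dead reference
            }
        }
    }

    private SoftCachePruner() {}
}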

Another problem: when memory is tight and the Bitmaps inside the soft references are reclaimed, the LruCache is left holding empty references. The memory cache effectively stops working, which inevitably hurts efficiency.

Method 2: onLowMemory

When system memory runs low, Activities and Fragments receive the onLowMemory callback. Glide uses this callback to release memory and reduce the chance of OOM.

//Glide
public void onLowMemory() {
    clearMemory();
}

public void clearMemory() {
    // Engine asserts this anyway when removing resources, fail faster and consistently
    Util.assertMainThread();
    // memory cache needs to be cleared before bitmap pool to clear re-pooled Bitmaps too. See #687.
    memoryCache.clearMemory();
    bitmapPool.clearMemory();
    arrayPool.clearMemory();
  }

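A hand-rolled loader can hook the same system signal by registering a ComponentCallbacks2 on the Application and clearing its memory cache there; a minimal sketch (the trim threshold is an arbitrary choice):

import android.app.Application;
import android.content.ComponentCallbacks2;
import android.content.res.Configuration;
import android.graphics.Bitmap;
import android.util.LruCache;

public class ImageLoaderCallbacks implements ComponentCallbacks2 {
    private final LruCache<String, Bitmap> memoryCache;

    public ImageLoaderCallbacks(Application app, LruCache<String, Bitmap> memoryCache) {
        this.memoryCache = memoryCache;
        app.registerComponentCallbacks(this);
    }

    @Override
    public void onLowMemory() {
        memoryCache.evictAll();                 // drop everything under severe memory pressure
    }

    @Override
    public void onTrimMemory(int level) {
        if (level >= TRIM_MEMORY_BACKGROUND) {  // app went to the background or worse
            memoryCache.evictAll();
        }
    }

    @Override
    public void onConfigurationChanged(Configuration newConfig) { }
}
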
Method 3: Consider where Bitmap pixels are stored

As we know, the memory the system allocates to each process (each virtual machine instance) is limited: 16 MB or 32 MB on early devices, 100+ MB now. The VM's runtime memory is divided into five areas:

  • Virtual machine stack
  • Native method stack
  • Program counter
  • Method area
  • Heap

Objects are allocated on the heap, the largest region of VM memory, and OOM errors usually originate there.

A Bitmap takes up so much memory not because of the object itself but because of its pixel data: pixel data size = width × height × bytes per pixel.

How much memory does one pixel take? It depends on the bitmap format; see the following definitions in Fresco:

  /**
   * Bytes per pixel definitions
   */
  public static final int ALPHA_8_BYTES_PER_PIXEL = 1;
  public static final int ARGB_4444_BYTES_PER_PIXEL = 2;
  public static final int ARGB_8888_BYTES_PER_PIXEL = 4;
  public static final int RGB_565_BYTES_PER_PIXEL = 2;
  public static final int RGBA_F16_BYTES_PER_PIXEL = 8;

If the Bitmap is in RGB_565 format, one pixel takes 2 bytes; with ARGB_8888 it takes 4 bytes. Memory footprint is worth weighing when choosing an image-loading framework: less memory per pixel means a lower chance of OOM. Glide's memory cost is about half of Picasso's because their default Bitmap formats differ.
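For example, a 1080 × 1920 image decoded as ARGB_8888 needs roughly 1080 × 1920 × 4 ≈ 7.9 MB of pixel memory, while the same image in RGB_565 needs about 4 MB.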

As for width and height, they are the width and height of the decoded Bitmap; see BitmapFactory.Options#outWidth:

/**
 * The resulting width of the bitmap. If {@link #inJustDecodeBounds} is
 * set to false, this will be width of the output bitmap after any
 * scaling is applied. If true, it will be the width of the input image
 * without any accounting for scaling.
 *
 * <p>outWidth will be set to -1 if there is an error trying to decode.</p>
 */
public int outWidth;

In other words, when BitmapFactory.Options#inJustDecodeBounds is true, outWidth is the width of the original input image; when it is false, it is the width of the decoded bitmap after any scaling. So in general we can downsample the Bitmap, shrinking its decoded width and height, to reduce its memory footprint.
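A sketch of the standard two-pass decode used for downsampling (the helper names follow the common pattern from the Android developer docs; adapt it to your own pipeline):

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class Downsampler {

    public static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;    // first pass: only read outWidth/outHeight
        BitmapFactory.decodeResource(res, resId, options);

        options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
        options.inJustDecodeBounds = false;   // second pass: decode the downsampled pixels
        return BitmapFactory.decodeResource(res, resId, options);
    }

    private static int calculateInSampleSize(BitmapFactory.Options options,
                                             int reqWidth, int reqHeight) {
        int halfWidth = options.outWidth / 2;
        int halfHeight = options.outHeight / 2;
        int inSampleSize = 1;
        // Keep doubling the sample size while the half-dimensions still exceed the target.
        while ((halfWidth / inSampleSize) >= reqWidth && (halfHeight / inSampleSize) >= reqHeight) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }
}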

A digression: the point of working out the pixel-data size above is to show why Bitmap pixel data is so large. Could that pixel data live not on the Java heap but on the native heap? It is often said that between Android 3.0 and 8.0 Bitmap pixel data lives on the Java heap, while from 8.0 onward it lives on the native heap. Is that true? Let's check the source code.

8.0 Bitmap

The Java-layer method that creates a Bitmap:

public static Bitmap createBitmap(@Nullable DisplayMetrics display, int width, int height,
        @NonNull Config config, boolean hasAlpha, @NonNull ColorSpace colorSpace) {
    ...
    Bitmap bm;
    ...
    if (config != Config.ARGB_8888 || colorSpace == ColorSpace.get(ColorSpace.Named.SRGB)) {
        // ultimately created by a native method
        bm = nativeCreate(null, 0, width, width, height, config.nativeInt, true, null, null);
    } else {
        bm = nativeCreate(null, 0, width, width, height, config.nativeInt, true,
                d50.getTransform(), parameters);
    }
    ...
    return bm;
}

So the Bitmap is ultimately created through the native method nativeCreate.

Corresponding source: 8.0.0_r4/xref/frameworks/base/core/jni/android/graphics/Bitmap.cpp

//Bitmap.cpp
static const JNINativeMethod gBitmapMethods[] = {
    { "nativeCreate",
      "([IIIIIIZ[FLandroid/graphics/ColorSpace$Rgb$TransferParameters;)Landroid/graphics/Bitmap;",
      (void*)Bitmap_creator },
    ...

Through JNI dynamic registration, nativeCreate maps to Bitmap_creator:

//Bitmap.cpp
static jobject Bitmap_creator(JNIEnv* env, jobject, jintArray jColors,
                              jint offset, jint stride, jint width, jint height,
                              jint configHandle, jboolean isMutable,
                              jfloatArray xyzD50, jobject transferParameters) {
    ...
    // 1. Allocate heap memory and create the native-layer Bitmap
    sk_sp<Bitmap> nativeBitmap = Bitmap::allocateHeapBitmap(&bitmap, NULL);
    if (!nativeBitmap) {
        return NULL;
    }
    ...
    // 2. Create the Java-layer Bitmap
    return createBitmap(env, nativeBitmap.release(), getPremulBitmapCreateFlags(isMutable));
}

There are two main steps:

  1. Allocate memory and create the native-layer Bitmap; take a look at the allocateHeapBitmap method:

    8.0.0_r4/xref/frameworks/base/libs/hwui/hwui/Bitmap.cpp
//hwui/Bitmap.cpp
static sk_sp<Bitmap> allocateHeapBitmap(size_t size, const SkImageInfo& info,
                                        size_t rowBytes, SkColorTable* ctable) {
    // allocate a block of memory with calloc
    void* addr = calloc(size, 1);
    if (!addr) {
        return nullptr;
    }
    return sk_sp<Bitmap>(new Bitmap(addr, size, info, rowBytes, ctable));
}

As you can see, a block of memory is allocated with the C calloc function, and the native-layer Bitmap object is then created with that address. In other words, the native-layer Bitmap's data (the pixel data) lives on the native heap.

  2. Create the Java-layer Bitmap:
//Bitmap.cpp
jobject createBitmap(JNIEnv* env, Bitmap* bitmap, int bitmapCreateFlags,
                     jbyteArray ninePatchChunk, jobject ninePatchInsets, int density) {
    ...
    BitmapWrapper* bitmapWrapper = new BitmapWrapper(bitmap);
    // Call back into the Java layer through JNI to construct the Java Bitmap object
    jobject obj = env->NewObject(gBitmap_class, gBitmap_constructorMethodID,
            reinterpret_cast<jlong>(bitmapWrapper), bitmap->width(), bitmap->height(),
            density, isMutable, isPremultiplied, ninePatchChunk, ninePatchInsets);
    ...
    return obj;
}

env->NewObject constructs a Java object of the class gBitmap_class using the constructor gBitmap_constructorMethodID; both are resolved during JNI registration:

//Bitmap.cpp
int register_android_graphics_Bitmap(JNIEnv* env) {
    gBitmap_class = MakeGlobalRefOrDie(env, FindClassOrDie(env, "android/graphics/Bitmap"));
    gBitmap_nativePtr = GetFieldIDOrDie(env, gBitmap_class, "mNativePtr", "J");
    gBitmap_constructorMethodID = GetMethodIDOrDie(env, gBitmap_class, "<init>",
            "(JIIIZZ[BLandroid/graphics/NinePatch$InsetStruct;)V");
    gBitmap_reinitMethodID = GetMethodIDOrDie(env, gBitmap_class, "reinit", "(IIZ)V");
    gBitmap_getAllocationByteCountMethodID =
            GetMethodIDOrDie(env, gBitmap_class, "getAllocationByteCount", "()I");
    return android::RegisterMethodsOrDie(env, "android/graphics/Bitmap",
            gBitmapMethods, NELEM(gBitmapMethods));
}

So creating a Bitmap on 8.0 involves two steps:

  1. Create the native-layer Bitmap, whose memory is allocated on the native heap.
  2. Create the Java-layer Bitmap object through JNI; that object itself is allocated on the Java heap.

The pixel data belongs to the native-layer Bitmap, which confirms that on 8.0 Bitmap pixel data lives on the native heap.

7.0 Bitmap

Go straight to the native layer.

7.0.0_r31/xref/frameworks/base/core/jni/android/graphics/Bitmap.cpp

//Bitmap.cpp
static const JNINativeMethod gBitmapMethods[] = {
    { "nativeCreate", "([IIIIIIZ)Landroid/graphics/Bitmap;", (void*)Bitmap_creator },
    ...

static jobject Bitmap_creator(JNIEnv* env, jobject, jintArray jColors,
                              jint offset, jint stride, jint width, jint height,
                              jint configHandle, jboolean isMutable) {
    ...
    // 1. Create the native-layer Bitmap through this method
    Bitmap* nativeBitmap = GraphicsJNI::allocateJavaPixelRef(env, &bitmap, NULL);
    ...
    return GraphicsJNI::createBitmap(env, nativeBitmap, getPremulBitmapCreateFlags(isMutable));
}

The native-layer Bitmap is created by GraphicsJNI::allocateJavaPixelRef; let's see how it allocates memory. GraphicsJNI's implementation lives in Graphics.cpp:

android::Bitmap* GraphicsJNI::allocateJavaPixelRef(JNIEnv* env, SkBitmap* bitmap,
                                                   SkColorTable* ctable) {
    const SkImageInfo& info = bitmap->info();
    size_t size;
    // compute the required allocation size
    if (!computeAllocationSize(*bitmap, &size)) {
        return NULL;
    }
    // we must respect the rowBytes value already set on the bitmap instead of
    // attempting to compute our own.
    const size_t rowBytes = bitmap->rowBytes();
    // 1. Call back into the Java layer to create a non-movable byte array
    jbyteArray arrayObj = (jbyteArray) env->CallObjectMethod(gVMRuntime,
                                                             gVMRuntime_newNonMovableArray,
                                                             gByte_class, size);
    ...
    // get the address of that array
    jbyte* addr = (jbyte*) env->CallLongMethod(gVMRuntime, gVMRuntime_addressOf, arrayObj);
    ...
    // 2. Create the native-layer Bitmap and hand it the array's address
    android::Bitmap* wrapper = new android::Bitmap(env, arrayObj, (void*) addr,
                                                   info, rowBytes, ctable);
    wrapper->getSkBitmap(bitmap);
    // since we're already allocated, we lockPixels right away
    // HeapAllocator behaves this way too
    bitmap->lockPixels();
    return wrapper;
}

As you can see, on 7.0 the pixel memory is allocated like this:

  1. Create an array by calling the Java layer through JNI
  2. Then create a native layer Bitmap and pass in the address of the array.

Thus, 7.0 Bitmap pixel data is placed in the Java heap.

It is also said that below 3.0, Bitmap pixel memory lived on the native heap as well, but the native-layer memory had to be released manually, i.e. you had to call recycle() yourself to free it. You can verify that in the source code.

How the native-layer Bitmap is freed

The Java-layer Bitmap object is collected automatically by the garbage collector, and nowadays we do not seem to need to free the native-layer Bitmap manually either. How does the source code handle it?

Remember this classic interview question:

Explain the relationship between final, finally, and finalize.

finalize is a method on Object, and its Javadoc reads:

/**
 * Called by the garbage collector on an object when garbage collection
 * determines that there are no more references to the object.
 * A subclass overrides the {@code finalize} method to dispose of
 * system resources or to perform other cleanup.
 * <p> ...
 **/
protected void finalize() throws Throwable { }

When the garbage collector determines that there are no more references to an object, it calls that object's finalize method. A subclass can override finalize to release resources or perform other cleanup.

Up through 6.0, Bitmap used finalize to release the native-layer object. See Bitmap.java in 6.0:

Bitmap(long nativeBitmap, byte[] buffer, int width, int height, int density,
        boolean isMutable, boolean requestPremultiplied,
        byte[] ninePatchChunk, NinePatch.InsetStruct ninePatchInsets) {
    ...
    mNativePtr = nativeBitmap;
    // 1. Create a BitmapFinalizer
    mFinalizer = new BitmapFinalizer(nativeBitmap);
    int nativeAllocationByteCount = (buffer == null ? getByteCount() : 0);
    mFinalizer.setNativeAllocationByteCount(nativeAllocationByteCount);
}

private static class BitmapFinalizer {
    private long mNativeBitmap;

    // Native memory allocated for the duration of the Bitmap,
    // if pixel data allocated into native memory, instead of java byte[]
    private int mNativeAllocationByteCount;

    BitmapFinalizer(long nativeBitmap) {
        mNativeBitmap = nativeBitmap;
    }

    public void setNativeAllocationByteCount(int nativeByteCount) {
        if (mNativeAllocationByteCount != 0) {
            VMRuntime.getRuntime().registerNativeFree(mNativeAllocationByteCount);
        }
        mNativeAllocationByteCount = nativeByteCount;
        if (mNativeAllocationByteCount != 0) {
            VMRuntime.getRuntime().registerNativeAllocation(mNativeAllocationByteCount);
        }
    }

    @Override
    public void finalize() {
        try {
            super.finalize();
        } catch (Throwable t) {
            // Ignore
        } finally {
            // 2. Here the native-layer Bitmap is released
            setNativeAllocationByteCount(0);
            nativeDestructor(mNativeBitmap);
            mNativeBitmap = 0;
        }
    }
}

A BitmapFinalizer is created in the Bitmap constructor, with finalize overridden. When the Java-layer Bitmap is collected, the BitmapFinalizer becomes unreachable too, its finalize method runs, and the native-layer Bitmap is destroyed there.

After 6.0 this changed: BitmapFinalizer was removed and replaced by NativeAllocationRegistry.

For example, the 8.0 Bitmap constructor

Bitmap(long nativeBitmap, int width, int height, int density,
        boolean isMutable, boolean requestPremultiplied,
        byte[] ninePatchChunk, NinePatch.InsetStruct ninePatchInsets) {
    ...
    mNativePtr = nativeBitmap;
    long nativeSize = NATIVE_ALLOCATION_SIZE + getAllocationByteCount();
    // Create a NativeAllocationRegistry and register this object with it
    NativeAllocationRegistry registry = new NativeAllocationRegistry(
            Bitmap.class.getClassLoader(), nativeGetNativeFinalizer(), nativeSize);
    registry.registerNativeAllocation(this, nativeBitmap);
}

We won't analyze NativeAllocationRegistry here. Both BitmapFinalizer and NativeAllocationRegistry serve the same purpose: when the Java-layer Bitmap is collected, the native-layer Bitmap is released along with it. You don't need to call recycle() manually; the GC handles it.

From the analysis above we know that since Android 8.0 the Bitmap pixel memory lives on the native heap, so (barring memory leaks) Bitmaps are far less likely to cause OOM on 8.0+ devices. What about devices below 8.0? Upgrade the system, or change the phone?

Sure, we could all change phones, but not every user keeps up with Android updates, so it is still a problem worth solving.

This is where Fresco stands out and can genuinely compete with Glide. Fresco's documentation lists this advantage up front: on Android 5.0 and below (down to 2.3), Fresco puts images into a special Ashmem region. Ashmem is anonymous shared memory; Fresco stores Bitmap pixels there, outside the Java heap.

The key Fresco source is in the PlatformDecoderFactory class:

public class PlatformDecoderFactory {
  /**
   * Provide the implementation of the PlatformDecoder for the current platform
   * using the provided PoolFactory
   */
  public static PlatformDecoder buildPlatformDecoder(
      PoolFactory poolFactory, boolean gingerbreadDecoderEnabled) {
    // 8.0 and above use the OreoDecoder
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
      int maxNumThreads = poolFactory.getFlexByteArrayPoolMaxNumThreads();
      return new OreoDecoder(
          poolFactory.getBitmapPool(), maxNumThreads, new Pools.SynchronizedPool<>(maxNumThreads));
    } else if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
      // 5.0 to 8.0 use the ArtDecoder
      int maxNumThreads = poolFactory.getFlexByteArrayPoolMaxNumThreads();
      return new ArtDecoder(
          poolFactory.getBitmapPool(), maxNumThreads, new Pools.SynchronizedPool<>(maxNumThreads));
    } else {
      if (gingerbreadDecoderEnabled && Build.VERSION.SDK_INT < Build.VERSION_CODES.KITKAT) {
        // below 4.4 use the GingerbreadPurgeableDecoder
        return new GingerbreadPurgeableDecoder();
      } else {
        // 4.4 to 5.0 use the KitKatPurgeableDecoder
        return new KitKatPurgeableDecoder(poolFactory.getFlexByteArrayPool());
      }
    }
  }
}

Set 8.0 aside for now and look at how bitmaps are obtained below 4.4. GingerbreadPurgeableDecoder has a method for decoding a Bitmap:

//GingerbreadPurgeableDecoder
private Bitmap decodeFileDescriptorAsPurgeable(
        CloseableReference<PooledByteBuffer> bytesRef, int inputLength,
        byte[] suffix, BitmapFactory.Options options) {
    // MemoryFile: anonymous shared memory
    MemoryFile memoryFile = null;
    ...
    // copy the image data into the anonymous shared memory
    memoryFile = copyToMemoryFile(bytesRef, inputLength, suffix);
    FileDescriptor fd = getMemoryFileDescriptor(memoryFile);
    if (mWebpBitmapFactory != null) {
        // create the Bitmap; Fresco uses its own decode method here
        Bitmap bitmap = mWebpBitmapFactory.decodeFileDescriptor(fd, null, options);
        return Preconditions.checkNotNull(bitmap, "BitmapFactory returned null");
    } else {
        throw new IllegalStateException("WebpBitmapFactory is null");
    }
    ...
}

To recap: below 4.4, Fresco stores Bitmap data in anonymous shared memory. It first copies the image data into the shared memory region and then decodes the Bitmap with its own factory method.

Fresco loads bitmaps differently on different Android versions: 4.4-5.0, 5.0-8.0, and 8.0+ each get their own decoder. You can start your own analysis from the PlatformDecoderFactory class, and think about why different platforms need different decoders; wouldn't anonymous shared memory work everywhere below 8.0? I look forward to your thoughts in the comments.

2.5 ImageView Memory Leaks

While doing on-site development at vivo, a page with an avatar feature was flagged for a memory leak. The cause was an SDK method that loaded the network avatar and held a reference to the ImageView.

The fix was simple and crude: wrap the ImageView in a WeakReference, as sketched below.
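A minimal sketch of that WeakReference fix (class and callback names are illustrative):

import android.graphics.Bitmap;
import android.widget.ImageView;

import java.lang.ref.WeakReference;

// The callback no longer keeps the ImageView (and its Activity) strongly reachable
// if the page is destroyed while the avatar is still loading.
public class AvatarTarget {
    private final WeakReference<ImageView> viewRef;

    public AvatarTarget(ImageView imageView) {
        this.viewRef = new WeakReference<>(imageView);
    }

    // Called on the main thread when the avatar bitmap is ready.
    public void onLoaded(Bitmap bitmap) {
        ImageView imageView = viewRef.get();
        if (imageView != null) {   // the view may already have been collected
            imageView.setImageBitmap(bitmap);
        }
    }
}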

This approach does solve the leak, but it is not ideal. When the page exits, we don't just want the ImageView to become collectible; we also want the in-flight load to be cancelled and any unfinished task removed from the queue.

Glide handles this by listening to lifecycle callbacks; see the RequestManager class:

public void onDestroy() {
    targetTracker.onDestroy();
    for (Target<?> target : targetTracker.getAll()) {
        // cancel the load for this target
        clear(target);
    }
    targetTracker.clear();
    requestTracker.clearRequests();
    lifecycle.removeListener(this);
    lifecycle.removeListener(connectivityMonitor);
    mainHandler.removeCallbacks(addSelfToLifecycle);
    glide.unregisterRequestManager(this);
}

When the Activity or Fragment is destroyed, the image-loading tasks are cancelled; see the source code for the details.

2.6 List Loading Problems

Images loading out of order

Because RecyclerView and ListView reuse item views, the ImageView that started a network load as the first item may, by the time the load finishes, have been recycled to display the tenth item. Showing the first item's image on the tenth item is obviously wrong.

The usual fix is to set a tag on the ImageView, typically the image URL, and check that the tag still matches the URL before updating the ImageView.
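A sketch of that tag check (the loader interface here is hypothetical, just to make the example self-contained):

import android.graphics.Bitmap;
import android.widget.ImageView;

// Bind the URL to the view before loading, and verify the view is still bound to the
// same URL when the result arrives.
public class ListImageBinder {

    public void bind(final ImageView imageView, final String url, ImageLoader loader) {
        imageView.setTag(url);   // remember which URL this view expects
        loader.load(url, new ImageLoader.Callback() {
            @Override
            public void onLoaded(Bitmap bitmap) {
                // The view may have been recycled for another row; only set the
                // bitmap if it still shows the row that requested this URL.
                if (url.equals(imageView.getTag())) {
                    imageView.setImageBitmap(bitmap);
                }
            }
        });
    }

    // Hypothetical loader interface, only for illustration.
    public interface ImageLoader {
        void load(String url, Callback callback);
        interface Callback {
            void onLoaded(Bitmap bitmap);
        }
    }
}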

You can also cancel the load when an item scrolls off screen; it is worth considering whether that logic belongs in the image-loading framework or in the UI layer.

Too many thread pool tasks

When the list is scrolled there are many image requests, and on first launch, with nothing cached, a large number of tasks pile up in the queue. So before requesting a network image, check whether the same task is already queued; if it is, do not enqueue it again.
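A sketch of that de-duplication (illustrative, not any particular framework's API): an in-flight table keyed by URL ensures the same image is only queued once.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RequestDeduper {
    private final Map<String, Runnable> inFlight = new ConcurrentHashMap<>();
    private final ExecutorService networkPool = Executors.newFixedThreadPool(4);

    public void submit(final String url, final Runnable loadTask) {
        // putIfAbsent returns null only for the first caller; later duplicates are dropped.
        if (inFlight.putIfAbsent(url, loadTask) != null) {
            return;
        }
        networkPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    loadTask.run();
                } finally {
                    inFlight.remove(url);   // allow the URL to be requested again later
                }
            }
        });
    }
}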

Conclusion

Using Glide as the guide, this article analyzed the essential requirements of an image-loading framework and the techniques and principles behind each one:

  • Asynchronous loading: at least two thread pools
  • Switching to the main thread: Handler
  • Caching: LruCache and DiskLruCache, both built on LinkedHashMap
  • Preventing OOM: soft references, LruCache, image compression, source-level analysis of where Bitmap pixels are stored, plus part of Fresco's source
  • Memory leaks: hold ImageView references carefully, manage lifecycles
  • List loading problems: use tags to avoid mis-ordered images, and skip tasks that are already queued

Reprinted from: Master Blue. Link: juejin.cn/post/684490…