Preface

The previous project used an MVP architecture built on RxJava + Glide + OkHttp + Retrofit and other open-source frameworks, but until now I had only used them without studying them in depth. Recently I decided to tackle all of them, so readers who have not followed this series may want to start now. By the end of the series, you will be much more comfortable dealing with questions about these frameworks, whether in an interview or on the job.

Android image loading framework Glide 4.9.0 (1): analyzing Glide's execution flow from the source code

Android image loading framework Glide 4.9.0 (2): analyzing Glide's cache strategy from the source code

Analyzing RxJava2's basic execution flow and thread-switching principle from the source code

Analyzing OkHttp3 from the source code (1): the synchronous and asynchronous execution flow

Analyzing OkHttp3 from the source code (2): the charm of interceptors

Analyzing OkHttp3 from the source code (3): the cache strategy

Analyzing Retrofit network requests from the source code, including the RxJava + Retrofit + OkHttp request flow

Introduction

In the previous article we walked through Glide's most basic execution flow, but knowing only the basic flow is clearly not enough; we have to dig into the details of how Glide handles things, such as the caching mechanism and image processing. In this article we will explore Glide's cache mechanism together.

Glide's cache mechanism is very thoughtfully designed. The following table summarizes Glide's caches.

Cache type | Implementation | Description
Active resources | ActiveResources | Images currently in use; when a resource is fetched from the memory cache it is moved into the active resources.
Memory cache | LruResourceCache | Decoded, recently loaded images are kept in memory.
Disk cache, resource type | DiskLruCacheWrapper | The decoded (transformed) image is written to a disk file.
Disk cache, raw data | DiskLruCacheWrapper | The original data is cached on disk after the network request succeeds.

If you are not familiar with Glide's execution flow, first read Android image loading framework Glide 4.9.0 (1): analyzing the simplest execution flow from the source code.

Before diving into the caching principles, let's look at the order in which the caches are checked during a load to get a first impression.

Cache key generation

Glide generates a cache key for both the memory cache and the disk cache. From the loading flow in the previous article, we know the key is generated in Engine's load function. Let's look at the code.

public class Engine implements EngineJobListener,
    MemoryCache.ResourceRemovedListener,
    EngineResource.ResourceListener {
  ...
  public synchronized <R> LoadStatus load(
      GlideContext glideContext,
      Object model,
      Key signature,
      int width,
      int height,
      Class<?> resourceClass,
      Class<R> transcodeClass,
      Priority priority,
      DiskCacheStrategy diskCacheStrategy,
      Map<Class<?>, Transformation<?>> transformations,
      boolean isTransformationRequired,
      boolean isScaleOnlyOrNoTransform,
      Options options,
      boolean isMemoryCacheable,
      boolean useUnlimitedSourceExecutorPool,
      boolean useAnimationPool,
      boolean onlyRetrieveFromCache,
      ResourceCallback cb,
      Executor callbackExecutor) {
    ...
    // 1. Generate the unique cache key; model is the image address
    EngineKey key = keyFactory.buildKey(model, signature, width, height, transformations,
        resourceClass, transcodeClass, options);
    ...
  }
  ...

  // Generate the key
  EngineKey buildKey(Object model, Key signature, int width, int height,
      Map<Class<?>, Transformation<?>> transformations, Class<?> resourceClass,
      Class<?> transcodeClass, Options options) {
    return new EngineKey(model, signature, width, height, transformations, resourceClass,
        transcodeClass, options);
  }
}
class EngineKey implements Key {
  ...
  @Override
  public boolean equals(Object o) {
    if (o instanceof EngineKey) {
      EngineKey other = (EngineKey) o;
      return model.equals(other.model)
          && signature.equals(other.signature)
          && height == other.height
          && width == other.width
          && transformations.equals(other.transformations)
          && resourceClass.equals(other.resourceClass)
          && transcodeClass.equals(other.transcodeClass)
          && options.equals(other.options);
    }
    return false;
  }

  @Override
  public int hashCode() {
    if (hashCode == 0) {
      hashCode = model.hashCode();
      hashCode = 31 * hashCode + signature.hashCode();
      hashCode = 31 * hashCode + width;
      hashCode = 31 * hashCode + height;
      hashCode = 31 * hashCode + transformations.hashCode();
      hashCode = 31 * hashCode + resourceClass.hashCode();
      hashCode = 31 * hashCode + transcodeClass.hashCode();
      hashCode = 31 * hashCode + options.hashCode();
    }
    return hashCode;
  }
  ...
}

From the code you can see how many parameters feed into the key, mainly the URL (the model), the signature, and the width and height; EngineKey overrides hashCode and equals internally to guarantee that the key object is unique.
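To see why the overridden equals and hashCode matter, here is a minimal, hypothetical sketch (not Glide's actual EngineKey; the class and field names are invented for illustration): two keys built from the same fields must hit the same HashMap entry, otherwise every cache lookup would be a miss.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for EngineKey: a value object keyed on the
// image address plus the requested dimensions.
final class ImageKey {
  private final String model; // the image address (URL)
  private final int width;
  private final int height;

  ImageKey(String model, int width, int height) {
    this.model = model;
    this.width = width;
    this.height = height;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof ImageKey)) {
      return false;
    }
    ImageKey other = (ImageKey) o;
    return width == other.width && height == other.height && model.equals(other.model);
  }

  @Override
  public int hashCode() {
    // Same 31-multiplier accumulation style as EngineKey
    int result = model.hashCode();
    result = 31 * result + width;
    result = 31 * result + height;
    return result;
  }
}
```

With these overrides, a second key constructed from the same URL and dimensions finds the entry stored under the first, while a key with a different width does not.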

Memory cache

Glide enables the memory cache for us by default, so we do not need to call skipMemoryCache at all; calling skipMemoryCache(true) would actually disable it.

// Memory caching is enabled by default via this BaseRequestOptions member variable:
private boolean isCacheable = true;

// At the call site, the memory cache can be skipped explicitly:
Glide.
      with(MainActivity.this.getApplication()).
      load(url).
      // Pass true to skip the memory cache (it is on by default)
      skipMemoryCache(true).
      into(imageView);

During loading, the active cache is checked first, and the memory cache is checked only if the active cache misses. Let's look at the code.

public class Engine implements EngineJobListener,
    MemoryCache.ResourceRemovedListener,
    EngineResource.ResourceListener {
  ...
  public synchronized <R> LoadStatus load(... /* parameters */) {
    ...
    // 1. Generate the unique cache key; model is the image address
    EngineKey key = keyFactory.buildKey(model, signature, width, height, transformations,
        resourceClass, transcodeClass, options);

    // 2. First try the active cache in memory - ActiveResources
    EngineResource<?> active = loadFromActiveResources(key, isMemoryCacheable);
    if (active != null) {
      cb.onResourceReady(active, DataSource.MEMORY_CACHE);
      if (VERBOSE_IS_LOGGABLE) {
        logWithTimeAndKey("Loaded resource from active resources", startTime, key);
      }
      return null;
    }

    // 3. If not in the active cache, try the LRU memory cache
    EngineResource<?> cached = loadFromCache(key, isMemoryCacheable);
    if (cached != null) {
      cb.onResourceReady(cached, DataSource.MEMORY_CACHE);
      if (VERBOSE_IS_LOGGABLE) {
        logWithTimeAndKey("Loaded resource from cache", startTime, key);
      }
      return null;
    }
    ...
  }
  ...
}

Why does Glide have two memory caches?

Do you know why Glide has two memory caches (a Map of weak references plus an LRU memory cache)? Have you ever wondered? ActiveResources is a HashMap of weak references used to cache the images currently in use. When I first read that sentence I really didn't understand it, because LRU already evicts only the least recently used entries, so using ActiveResources to cache in-use images felt contradictory. The more I thought about it, the more conflicted I felt, until over lunch a scenario suddenly occurred to me; I don't know whether it is the real reason.

A concrete example: suppose the LRU memory cache is sized to hold 99 images. While scrolling a RecyclerView, as soon as we load the 100th image, the first one we loaded is evicted. If we then scroll back to the first item, Glide checks the memory cache again, finds nothing, and starts a brand-new request, which is clearly not what we want if the first image was only evicted by LRU pressure. So when resource data is fetched from the memory cache, it is actively moved into the active resources and removed from the memory cache. The obvious benefit is that images we do not want evicted are protected from the LruCache algorithm, making full use of resources. I am not sure this understanding is correct; if there are other reasons, please let me know. Thank you!

Glide has two memory caches, so let's talk about how each of them stores, fetches, and deletes entries.

ActiveResources: active resources

Fetching active resources

From the introduction above, we know the active cache is fetched in Engine's load function:

public synchronized <R> LoadStatus load(... /* parameters */) {
  ...
  // 1. First try the active cache in memory - ActiveResources
  EngineResource<?> active = loadFromActiveResources(key, isMemoryCacheable);
  if (active != null) {
    // If the resource is found, return it to the upper layer
    cb.onResourceReady(active, DataSource.MEMORY_CACHE);
    return null;
  }
  ...
}

@Nullable
private EngineResource<?> loadFromActiveResources(Key key, boolean isMemoryCacheable) {
  if (!isMemoryCacheable) {
    return null;
  }
  // Fetch the cached resource through ActiveResources' get function
  EngineResource<?> active = activeResources.get(key);
  if (active != null) {
    // The resource is in use, so increment the reference count
    active.acquire();
  }

  return active;
}

Let's move on to the actual implementation of get:

final class ActiveResources {
  ...
  @VisibleForTesting
  final Map<Key, ResourceWeakReference> activeEngineResources = new HashMap<>();
  private final ReferenceQueue<EngineResource<?>> resourceReferenceQueue = new ReferenceQueue<>();
  ...

  // Called externally to fetch the active resource cache
  @Nullable
  synchronized EngineResource<?> get(Key key) {
    // The storage structure is HashMap + WeakReference;
    // fetch the active cache via the HashMap's get function
    ResourceWeakReference activeRef = activeEngineResources.get(key);
    if (activeRef == null) {
      return null;
    }
    EngineResource<?> active = activeRef.get();
    if (active == null) {
      cleanupActiveReference(activeRef);
    }
    return active;
  }

  // Extends WeakReference to avoid memory leaks
  @VisibleForTesting
  static final class ResourceWeakReference extends WeakReference<EngineResource<?>> {
    ...
  }
}

From the code above we know the active cache is maintained with a Map plus WeakReference; the benefit is that image resources cannot cause memory leaks.
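The HashMap + WeakReference storage structure can be sketched as follows. This is a hypothetical demo (not Glide's code; WeakCache is an invented name): a weak reference does not keep its referent alive, so once nothing else holds the image, the garbage collector may clear the entry and get returns null, which is exactly why ActiveResources cannot leak.

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of ActiveResources' storage structure: a map whose
// values are weak references to the cached resources.
class WeakCache {
  private final Map<String, WeakReference<byte[]>> entries = new HashMap<>();

  void put(String key, byte[] value) {
    entries.put(key, new WeakReference<>(value));
  }

  // Returns null when the key is absent, or when the referent has been
  // garbage-collected (the weak reference was cleared).
  byte[] get(String key) {
    WeakReference<byte[]> ref = entries.get(key);
    return ref == null ? null : ref.get();
  }
}
```

While a strong reference to the value exists elsewhere (the image is "in use"), get deterministically returns it; after the last strong reference is dropped, a future GC is free to clear the weak reference.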

Storing active resources

As mentioned in the table at the start of this article, a resource is stored in the active resources after it is loaded from the memory cache, so let's look at when a memory resource is loaded.

Again, this is in Engine's load function:

// 1. If not in the active cache, try the LRU memory cache
EngineResource<?> cached = loadFromCache(key, isMemoryCacheable);
if (cached != null) {
  cb.onResourceReady(cached, DataSource.MEMORY_CACHE);
  if (VERBOSE_IS_LOGGABLE) {
    logWithTimeAndKey("Loaded resource from cache", startTime, key);
  }
  return null;
}

// Load the image resource from the memory cache
private EngineResource<?> loadFromCache(Key key, boolean isMemoryCacheable) {
  if (!isMemoryCacheable) {
    return null;
  }
  // 2. Fetch from the memory cache; internally this removes the entry for the current key
  EngineResource<?> cached = getEngineResourceFromCache(key);
  if (cached != null) {
    // 3. If the memory cache hit, increment the reference count
    cached.acquire();
    // 4. Add it to the active cache
    activeResources.activate(key, cached);
  }
  return cached;
}

From comments 2 and 4, we know that when we fetch from the memory cache, the entry for the current key is first removed from the memory cache and then added to the active cache.

Clearing active resources

Through the previous article's analysis of the simplest execution flow in Glide 4.9.0, we know that EngineJob issues the notification callback that tells Engine, via onResourceReleased, to delete the active resource. To review that flow, let's start with the EngineJob notification:

class EngineJob<R> implements DecodeJob.Callback<R>, Poolable {
  ...
  @Override
  public void onResourceReady(Resource<R> resource, DataSource dataSource) {
    synchronized (this) {
      this.resource = resource;
      this.dataSource = dataSource;
    }
    notifyCallbacksOfResult();
  }

  @Synthetic
  void notifyCallbacksOfResult() {
    ...
    // Notify the upper-layer Engine that the job is complete
    listener.onEngineJobComplete(this, localKey, localResource);

    // Iterate over the resource callbacks so the result reaches ImageViewTarget and is displayed
    for (final ResourceCallbackAndExecutor entry : copy) {
      entry.executor.execute(new CallResourceReady(entry.cb));
    }
    // This is where the notification to release the active resource is sent
    decrementPendingCallbacks();
  }
  ...

  // This is the key point
  @Synthetic
  synchronized void decrementPendingCallbacks() {
    ...
    if (decremented == 0) {
      if (engineResource != null) {
        // If it is not null, call its internal release function
        engineResource.release();
      }
    }
  }
}

Take a look at the release function of EngineResource

class EngineResource<Z> implements Resource<Z> {
  ...
  void release() {
    synchronized (listener) {
      synchronized (this) {
        if (acquired <= 0) {
          throw new IllegalStateException("Cannot release a recycled or not yet acquired resource");
        }
        // Each call to release decrements the internal reference count;
        // when it reaches zero there are no references left, so notify the upper layer
        if (--acquired == 0) {
          // Callback received by Engine
          listener.onResourceReleased(key, this);
        }
      }
    }
  }
  ...
}

From the comments we know that reference counting is used here, somewhat reminiscent of the reference-counting approach to garbage collection. In other words, once nothing is using the image any more, the active resource is cleared and Engine's onResourceReleased function is called.
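The acquire/release pairing can be sketched with a tiny, hypothetical class (not Glide's code; CountedResource and onReleased are invented names). The callback stands in for listener.onResourceReleased, which in Glide demotes the resource from the active cache into the memory cache.

```java
// Hypothetical sketch of EngineResource's reference counting: each consumer
// calls acquire(), and the onReleased callback fires only when the last
// consumer calls release().
class CountedResource {
  private int acquired;
  private final Runnable onReleased; // fires when the count drops to zero

  CountedResource(Runnable onReleased) {
    this.onReleased = onReleased;
  }

  synchronized void acquire() {
    acquired++;
  }

  synchronized void release() {
    if (acquired <= 0) {
      throw new IllegalStateException("Cannot release a resource that was not acquired");
    }
    if (--acquired == 0) {
      onReleased.run(); // no more users: notify the upper layer
    }
  }

  synchronized int refCount() {
    return acquired;
  }
}
```

With two acquires, the first release leaves the resource alive; only the second one triggers the callback.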

public class Engine implements EngineJobListener,
    MemoryCache.ResourceRemovedListener,
    EngineResource.ResourceListener {
  ...
  // Receives the release callback from EngineResource
  @Override
  public synchronized void onResourceReleased(Key cacheKey, EngineResource<?> resource) {
    // 1. The image is no longer referenced, so remove it from the active resources
    activeResources.deactivate(cacheKey);
    // 2. If the memory cache is enabled
    if (resource.isCacheable()) {
      // 3. Put the resource into the memory cache
      cache.put(cacheKey, resource);
    } else {
      resourceRecycler.recycle(resource);
    }
  }
  ...
}

As comment 1 shows, the cached image is first removed from activeResources and then put into the LruResourceCache memory cache. In this way, images that are in use are cached with weak references, while images no longer in use are handed to the LruCache. The design is really clever.

LruResourceCache: memory resources

Fetching memory resources

When we discussed storing active resources above, the memory cache was already involved. Let's look at the specific code:

public class Engine implements EngineJobListener,
    MemoryCache.ResourceRemovedListener,
    EngineResource.ResourceListener {
  ...
  // Some member variables and constructors omitted
  public synchronized <R> LoadStatus load(... /* parameters */) {
    ...
    // 1. Load from the memory cache
    EngineResource<?> cached = loadFromCache(key, isMemoryCacheable);
    if (cached != null) {
      // On a memory-cache hit, notify the upper layer; the result is finally received in SingleRequest
      cb.onResourceReady(cached, DataSource.MEMORY_CACHE);
      if (VERBOSE_IS_LOGGABLE) {
        logWithTimeAndKey("Loaded resource from cache", startTime, key);
      }
      return null;
    }
    ...
  }

  private EngineResource<?> loadFromCache(Key key, boolean isMemoryCacheable) {
    if (!isMemoryCacheable) {
      return null;
    }
    // 2. Fetch the memory resource via getEngineResourceFromCache
    EngineResource<?> cached = getEngineResourceFromCache(key);
    if (cached != null) { // If the memory resource exists
      // Increment the reference count
      cached.acquire();
      // 3. Move the memory resource into the active resources
      activeResources.activate(key, cached);
    }
    return cached;
  }

  private EngineResource<?> getEngineResourceFromCache(Key key) {
    // cache here is the LRU memory resource cache
    // 2.1 Note that remove is used to fetch the cached resource
    Resource<?> cached = cache.remove(key);

    final EngineResource<?> result;
    if (cached == null) {
      result = null;
    } else if (cached instanceof EngineResource) {
      result = (EngineResource<?>) cached;
    } else {
      result = new EngineResource<>(cached, true /*isMemoryCacheable*/, true /*isRecyclable*/);
    }
    return result;
  }
  ...
}

Comment 1 is where loading from the memory cache starts.

As comment 2.1 shows, the memory cache is read with the LRU cache's remove, so a hit also removes the entry.

Finally, the resource just fetched from the memory cache is stored into the active cache.

Here the active cache and the memory cache are closely linked, which answers our earlier question of why Glide has two memory caches.

Storing memory resources

Thanks to EngineResource's reference-counting mechanism, the release function is called back as soon as there are no more references. See the following code:

class EngineJob<R> implements DecodeJob.Callback<R>, Poolable {
  ...
  @Override
  public void onResourceReady(Resource<R> resource, DataSource dataSource) {
    synchronized (this) {
      this.resource = resource;
      this.dataSource = dataSource;
    }
    notifyCallbacksOfResult();
  }

  @Synthetic
  void notifyCallbacksOfResult() {
    ...
    // This is where the notification to release the active resource is sent
    decrementPendingCallbacks();
  }
  ...

  // This is the key point
  @Synthetic
  synchronized void decrementPendingCallbacks() {
    ...
    if (decremented == 0) {
      if (engineResource != null) {
        // If it is not null, call its internal release function
        engineResource.release();
      }
    }
  }
}

void release() {
  synchronized (listener) {
    synchronized (this) {
      // The callback fires when the reference count reaches zero
      if (--acquired == 0) {
        listener.onResourceReleased(key, this);
      }
    }
  }
}

This will call back to the Engine’s onResourceReleased function:

@Override
public synchronized void onResourceReleased(Key cacheKey, EngineResource<?> resource) {
  // 1. Remove the entry from the active cache
  activeResources.deactivate(cacheKey);
  // If the memory cache is enabled
  if (resource.isCacheable()) {
    // 2. After removal from the active resources, add the resource to the memory cache
    cache.put(cacheKey, resource);
  } else {
    resourceRecycler.recycle(resource);
  }
}

At this point we know that a resource is added to the memory cache at the moment it is cleared from the active cache.

Clearing the memory cache

Clearing happens when the memory cache is read: the entry is fetched with remove, which clears it at the same time. See the section on fetching memory resources above for details.

Summary of memory cache

From the analysis above, the memory cache consists of the active cache and the memory resource cache. The following diagram summarizes how they interact and exchange data.

To summarize the steps:

  1. On the first load, the active cache misses.
  2. The memory resource cache is checked next; on a hit, the entry is removed from the memory cache and added to the active cache.
  3. On the second load, the active cache hits.
  4. When the image's reference count drops to zero, the active resource is cleared and the resource is added back to the memory cache.
  5. Then the cycle returns to step 1.
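The interaction between the two memory caches can be sketched as a small, hypothetical demo (not Glide's code; all names are invented). Glide's active map holds weak references; strong references are used here to keep the demo deterministic, and an access-ordered LinkedHashMap stands in for LruResourceCache.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the two-level memory cache: an "active" map for
// in-use images plus an LRU cache for idle ones.
class TwoLevelCache {
  private final Map<String, String> active = new HashMap<>();
  private final Map<String, String> lru;

  TwoLevelCache(final int maxLruEntries) {
    // Access-ordered LinkedHashMap evicting the eldest entry, like an LRU cache
    this.lru = new LinkedHashMap<String, String>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > maxLruEntries;
      }
    };
  }

  // A freshly decoded resource goes straight into the active map (it is in use)
  void onDecoded(String key, String value) {
    active.put(key, value);
  }

  // Steps 1-3: check the active cache first; on an LRU hit, remove the entry
  // and promote it to the active map so LRU eviction can no longer touch it.
  String load(String key) {
    String hit = active.get(key);
    if (hit != null) {
      return hit;
    }
    hit = lru.remove(key);        // mirrors cache.remove(key) in Engine
    if (hit != null) {
      active.put(key, hit);       // mirrors activeResources.activate(...)
    }
    return hit;
  }

  // Step 4: when the reference count drops to zero, demote back to the LRU cache.
  void release(String key) {
    String value = active.remove(key); // mirrors activeResources.deactivate(...)
    if (value != null) {
      lru.put(key, value);             // mirrors cache.put(cacheKey, resource)
    }
  }

  boolean inActive(String key) { return active.containsKey(key); }
  boolean inLru(String key) { return lru.containsKey(key); }
}
```

A resource is always in exactly one of the two maps: active while in use, LRU once released, and promoted back to active on the next load.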

Disk cache

Before introducing the disk cache, let's look at a table.

Strategy | Description
DiskCacheStrategy.NONE | Disables disk caching
DiskCacheStrategy.RESOURCE | Caches only the transformed image
DiskCacheStrategy.ALL | Caches both the original data and the transformed image
DiskCacheStrategy.DATA | Caches only the original data
DiskCacheStrategy.AUTOMATIC | Automatically picks a strategy based on the data source (the default)

The values above are easy to understand. The one concept to remember is that when we use Glide to load an image, Glide by default does not display the original image; instead it compresses and transforms it, and the result of that series of operations is called the transformed image. Glide's default disk cache strategy is DiskCacheStrategy.AUTOMATIC.

To enable disk caching:

Glide.
      with(MainActivity.this.getApplication()).
      load(url).
      // Cache only the transformed image on disk
      diskCacheStrategy(DiskCacheStrategy.RESOURCE).
      into(imageView);

Now that you know how to turn it on, let’s take a look at the disk cache loading and storage.

The loading processes for the two disk cache types are almost identical; only the data source differs. Let's look at the details below.

DiskCacheStrategy.RESOURCE: the resource type

Fetching resource data

If neither the active cache nor the memory cache has the data, a GlideExecutor thread pool is started to run a new request, which executes in DecodeJob's run function. Follow it to find where resource data is loaded:

class DecodeJob<R> implements DataFetcherGenerator.FetcherReadyCallback,
    Runnable, Comparable<DecodeJob<?>>, Poolable {
  ...
  @Override
  public void run() {
    ...
    try {
      // If cancelled, notify the caller that loading failed
      if (isCancelled) {
        notifyFailed();
        return;
      }
      // 1. Execute runWrapped
      runWrapped();
    } catch (CallbackException e) {
      ...
    }
  }
  ...

  private void runWrapped() {
    switch (runReason) {
      case INITIALIZE:
        // 2. Determine the execution stage
        stage = getNextStage(Stage.INITIALIZE);
        // 3. Find the concrete generator
        currentGenerator = getNextGenerator();
        // 4. Start executing
        runGenerators();
        break;
      ...
    }
  }

  private DataFetcherGenerator getNextGenerator() {
    switch (stage) {
      case RESOURCE_CACHE: // 3.1 Generator for decoded (transformed) resources
        return new ResourceCacheGenerator(decodeHelper, this);
      case DATA_CACHE: // Generator for raw data
        return new DataCacheGenerator(decodeHelper, this);
      case SOURCE: // New request; the HTTP generator
        return new SourceGenerator(decodeHelper, this);
      case FINISHED:
        return null;
      default:
        throw new IllegalStateException("Unrecognized stage: " + stage);
    }
  }
}

From the code and comments above we can see how the concrete generator is chosen, and that execution starts at comment 4. Because we configured the RESOURCE disk cache strategy externally, comment 3.1 shows that a ResourceCacheGenerator is selected. Now let's look directly at comment 4.
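The stage progression (resource cache, then data cache, then network source) can be sketched as a simple fall-through chain. This is a hypothetical illustration (not Glide's code; GeneratorChain and its parameters are invented): each stage gets a chance to produce data, and the job falls through to the next stage only when the current one cannot.

```java
import java.util.function.Supplier;

// Hypothetical sketch of the generator chain that DecodeJob walks:
// RESOURCE_CACHE -> DATA_CACHE -> SOURCE.
class GeneratorChain {
  enum Stage { RESOURCE_CACHE, DATA_CACHE, SOURCE, FINISHED }

  // Walks the stages in order and reports which one produced the data,
  // mimicking the fall-through in runGenerators(). Each supplier returns
  // true if that stage can start loading.
  static Stage run(Supplier<Boolean> resourceCache,
                   Supplier<Boolean> dataCache,
                   Supplier<Boolean> source) {
    if (resourceCache.get()) return Stage.RESOURCE_CACHE; // transformed image on disk
    if (dataCache.get()) return Stage.DATA_CACHE;         // raw data on disk
    if (source.get()) return Stage.SOURCE;                // fresh network request
    return Stage.FINISHED;                                // nothing could load
  }
}
```

For example, when the resource cache misses but the raw-data cache hits, the chain stops at DATA_CACHE and never touches the network.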

private void runGenerators() {
  ...
  // While the task is not cancelled and a generator exists, keep calling
  // currentGenerator.startNext() until one of them starts loading
  while (!isCancelled && currentGenerator != null
      && !(isStarted = currentGenerator.startNext())) {
    stage = getNextStage(stage);
    currentGenerator = getNextGenerator();

    if (stage == Stage.SOURCE) {
      reschedule();
      return;
    }
  }
  ...
}

The code above mainly executes currentGenerator.startNext(). currentGenerator is an interface, and from comment 3.1 we know its implementation here is ResourceCacheGenerator, so let's look at ResourceCacheGenerator's startNext function.

class ResourceCacheGenerator implements DataFetcherGenerator,
    DataFetcher.DataCallback<Object> {
  ...
  @Override
  public boolean startNext() {
    ...
    while (modelLoaders == null || !hasNextModelLoader()) {
      resourceClassIndex++;
      ...
      // 1. Build the resource cache key
      currentKey =
          new ResourceCacheKey(// NOPMD AvoidInstantiatingObjectsInLoops
              helper.getArrayPool(),
              sourceId,
              helper.getSignature(),
              helper.getWidth(),
              helper.getHeight(),
              transformation,
              resourceClass,
              helper.getOptions());
      // 2. Fetch the cached resource file using the key
      cacheFile = helper.getDiskCache().get(currentKey);
      if (cacheFile != null) {
        sourceKey = sourceId;
        modelLoaders = helper.getModelLoaders(cacheFile);
        modelLoaderIndex = 0;
      }
    }

    loadData = null;
    boolean started = false;
    while (!started && hasNextModelLoader()) {
      // 3. Get a data loader
      ModelLoader<File, ?> modelLoader = modelLoaders.get(modelLoaderIndex++);
      // 3.1 For a resource cache file, the loader built here is
      // ByteBufferFileLoader's inner class ByteBufferFetcher
      loadData = modelLoader.buildLoadData(cacheFile,
          helper.getWidth(), helper.getHeight(), helper.getOptions());
      if (loadData != null && helper.hasLoadPath(loadData.fetcher.getDataClass())) {
        started = true;
        // 3.2 Load with ByteBufferFetcher; the result is finally called back
        // to DecodeJob's onDataFetcherReady function
        loadData.fetcher.loadData(helper.getPriority(), this);
      }
    }
    ...
  }
}

A few things can be learned from the comments above:

  1. First, the resource cache key is built from the source id and other information.
  2. The cache file is fetched with that key.
  3. A ByteBufferFetcher is built to load the cache file.
  4. When loading completes, the result is called back to the DecodeJob.

Storing resource data

Let’s start with the following code:

class DecodeJob<R> implements DataFetcherGenerator.FetcherReadyCallback,
    Runnable, Comparable<DecodeJob<?>>, Poolable {
  ...
  private void notifyEncodeAndRelease(Resource<R> resource, DataSource dataSource) {
    ...
    stage = Stage.ENCODE;
    try {
      // 1. Check whether the transformed image can be cached
      if (deferredEncodeManager.hasResourceToEncode()) {
        // 1.1 The entry point for caching
        deferredEncodeManager.encode(diskCacheProvider, options);
      }
    } finally {
      ...
    }
    onEncodeComplete();
  }
}

void encode(DiskCacheProvider diskCacheProvider, Options options) {
  GlideTrace.beginSection("DecodeJob.encode");
  try {
    // 1.2 Write the Bitmap to the resource disk cache
    diskCacheProvider.getDiskCache().put(key,
        new DataCacheWriter<>(encoder, toEncode, options));
  } finally {
    toEncode.unlock();
    GlideTrace.endSection();
  }
}

From the above we know that the input stream returned by the HTTP request goes through a series of processing and transformation steps to produce the target Bitmap resource, which is finally cached via the callback into DecodeJob.

Clearing the resource cache

  1. The user manually clears the app's cache.
  2. The app is uninstalled.
  3. DiskCache.clear() is called.

DiskCacheStrategy.DATA: the raw data type

Fetching raw data

This follows the same path as fetching DiskCacheStrategy.RESOURCE resources above; the only difference is that a DataCacheGenerator is used for loading instead of a ResourceCacheGenerator.

Storing raw data

Since we are dealing with raw data here, let's start right after the HTTP request returns its response. From the previous article we know the network request is made in HttpUrlFetcher, so let's go straight to it:

public class HttpUrlFetcher implements DataFetcher<InputStream> {

  @Override
  public void loadData(@NonNull Priority priority,
      @NonNull DataCallback<? super InputStream> callback) {
    long startTime = LogTime.getLogTime();
    try {
      // 1. loadDataWithRedirects performs the HTTP request and returns an InputStream
      InputStream result = loadDataWithRedirects(glideUrl.toURL(), 0, null, glideUrl.getHeaders());
      // 2. Return the response data through the callback
      callback.onDataReady(result);
    } catch (IOException e) {
      ...
    } finally {
      ...
    }
  }
}

As the comments show, this performs the network request and calls the response data back to the MultiModelLoader. Let's see how it works:

class MultiModelLoader<Model, Data> implements ModelLoader<Model, Data> {
  ...
  @Override
  public void onDataReady(@Nullable Data data) {
    // If the data is not null, call back to the SourceGenerator
    if (data != null) {
      callback.onDataReady(data);
    } else {
      startNextOrFail();
    }
  }
  ...
}

The callback here refers to the SourceGenerator; let's continue:

class SourceGenerator implements DataFetcherGenerator,
    DataFetcher.DataCallback<Object>,
    DataFetcherGenerator.FetcherReadyCallback {
  ...
  @Override
  public void onDataReady(Object data) {
    DiskCacheStrategy diskCacheStrategy = helper.getDiskCacheStrategy();
    if (data != null && diskCacheStrategy.isDataCacheable(loadData.fetcher.getDataSource())) {
      // 1. After receiving the original data downloaded from the network,
      // assign it to the member variable dataToCache
      dataToCache = data;
      // 2. Hand back to the EngineJob
      cb.reschedule();
    } else {
      cb.onDataFetcherReady(loadData.sourceKey, data, loadData.fetcher,
          loadData.fetcher.getDataSource(), originalKey);
    }
  }
  ...
}

cb.reschedule() finally calls back into the EngineJob class, whose reschedule(DecodeJob<?> job) function executes getActiveSourceExecutor().execute(job). The thread pool runs the task, which goes back into DecodeJob's run function and eventually into SourceGenerator's startNext(). I won't post that path again, since it has been covered several times; let's go straight to SourceGenerator's startNext() function:

class SourceGenerator implements DataFetcherGenerator,
    DataFetcher.DataCallback<Object>,
    DataFetcherGenerator.FetcherReadyCallback {

  /** Temporary variable holding the raw data from the HTTP request */
  private Object dataToCache;

  @Override
  public boolean startNext() {
    ...
    if (dataToCache != null) {
      Object data = dataToCache;
      dataToCache = null;
      // Write the data to the cache
      cacheData(data);
    }
    ...
    return started;
  }

  private void cacheData(Object dataToCache) {
    long startTime = LogTime.getLogTime();
    try {
      Encoder<Object> encoder = helper.getSourceEncoder(dataToCache);

      DataCacheWriter<Object> writer =
          new DataCacheWriter<>(encoder, dataToCache, helper.getOptions());
      originalKey = new DataCacheKey(loadData.sourceKey, helper.getSignature());
      // Store the raw data:
      // StreamEncoder's encode writes it to a file
      helper.getDiskCache().put(originalKey, writer);
    } finally {
      loadData.fetcher.cleanup();
    }

    sourceCacheGenerator =
        new DataCacheGenerator(Collections.singletonList(loadData.sourceKey), helper, this);
  }
}

According to the above code, the original data is written to the file.

Clearing the raw data cache

  1. The user manually clears the app's cache.
  2. The app is uninstalled.
  3. DiskCache.clear() is called.

Disk cache summary

Storage

  1. The resource cache is written only after the image has been transformed.
  2. The raw data is written to the cache after the network request succeeds.

Fetching

  1. Both the resource cache and the raw data are looked up in the GlideExecutor thread pool, inside the DecodeJob that obtains the data.
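The put/get pattern shared by both disk caches can be sketched as follows. This is a hypothetical demo (not Glide's DiskLruCacheWrapper; SimpleDiskCache and its methods are invented): a key is mapped to a file name and the bytes are written to, or read from, that file. hashCode is used here only for brevity; a real implementation would use a collision-resistant hash for the file name.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of a key-to-file disk cache.
class SimpleDiskCache {
  private final Path directory;

  SimpleDiskCache(Path directory) {
    this.directory = directory;
  }

  // Convenience factory backing the cache with a fresh temporary directory
  static SimpleDiskCache inTempDir() {
    try {
      return new SimpleDiskCache(Files.createTempDirectory("disk-cache-demo"));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  private Path fileFor(String key) {
    // A stable, filesystem-safe file name derived from the key
    return directory.resolve(Integer.toHexString(key.hashCode()));
  }

  void put(String key, byte[] data) {
    try {
      Files.write(fileFor(key), data);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // Returns null on a cache miss
  byte[] get(String key) {
    try {
      Path file = fileFor(key);
      return Files.exists(file) ? Files.readAllBytes(file) : null;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

The resource cache and the raw-data cache both follow this shape; they differ only in which key and which bytes (transformed image vs. original stream) are written.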

Let’s look at a flow chart here

Reuse pool

The reuse pool also plays a big role in Glide. I won't post the code here because the idea is easy to understand; you can read Glide's downsampling code for the details. Here I will briefly describe how the reuse pool is handled in Glide.

In Glide, every time an image needs to be decoded into a Bitmap, whether it comes from the memory cache or the disk cache, a reusable Bitmap is first looked up in the BitmapPool, and the decode result is then cached in memory.

Note: with the reuse pool, when a reusable image is available, that image's memory is reused, so reuse does not reduce the amount of memory the program is using. What Bitmap reuse reduces are the performance problems (jitter, fragmentation) caused by frequent memory allocation.
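The idea can be sketched with a simple size-keyed pool. This is a hypothetical illustration (not Glide's BitmapPool; BufferPool is an invented name, and byte arrays stand in for bitmaps): instead of allocating a fresh buffer for every decode, the caller asks the pool for one of the right size and hands it back when done.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a bitmap-style reuse pool, keyed by buffer size.
class BufferPool {
  private final Map<Integer, Deque<byte[]>> bySize = new HashMap<>();

  // Returns a recycled buffer of exactly this size, or a new one on a pool miss
  byte[] obtain(int size) {
    Deque<byte[]> bucket = bySize.get(size);
    if (bucket != null && !bucket.isEmpty()) {
      return bucket.pollFirst(); // reuse: no new allocation
    }
    return new byte[size];
  }

  // Hands a no-longer-needed buffer back for reuse
  void recycle(byte[] buffer) {
    bySize.computeIfAbsent(buffer.length, k -> new ArrayDeque<>()).addFirst(buffer);
  }
}
```

As the note above says, the total memory in use does not shrink; the win is that the second obtain of the same size returns the recycled buffer instead of triggering a new allocation.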

Conclusion

Finally, let's wrap up with a mind map:

As you can see, Glide's performance optimization is pushed close to the limit: it not only designs a multi-level cache strategy, but even manages costly Bitmap memory with a reuse pool. So even if the user turns off all the caches, Bitmap memory is still used sensibly, OOM is avoided, and object-creation overhead is kept as low as possible so that Glide loads smoothly.

That concludes the cache mechanism of Glide 4.9.0. I believe that after reading this, you will have a solid understanding of Glide's caching.

Thank you for your reading. If there are any mistakes in the article, please point them out. Thank you!

References

  • Guo Lin's Glide source code analysis