What is Glide?

Glide is an Android image loading and caching library that focuses on smooth loading of large numbers of images. It is a Google-recommended image-loading library developed by BumpTech, and it has been used extensively in Google's open source projects, including the official Google I/O app in 2014.

Introduction

Wiki address: the official Glide wiki

GitHub address: github.com/bumptech/glide

Features

1. Diversified media loading

Glide is more than just an image cache; it supports GIF, WebP, and more.

2. Lifecycle integration

Binding through Glide's with() method lets image loading requests follow the host's lifecycle, so requests can be managed dynamically (paused, resumed, and cleared).

3. Efficient caching strategy

  • Memory and disk image caching are supported
  • Images are cached at sizes based on the ImageView's dimensions
  • The default Bitmap decode format is PREFER_ARGB_8888_DISALLOW_HARDWARE
  • Bitmaps are reused through a BitmapPool

4. Rich image transformation APIs, supporting circle cropping, smooth scaling, and other features

How is Glide used?

1. Import the library with Gradle: implementation 'com.github.bumptech.glide:glide:4.7.1'

2. Configure the request with load(), apply(), into(), and related methods

public void loadImageView(ImageView view, String url) {
    RequestOptions options = new RequestOptions()
            // Placeholder image
            .placeholder(R.mipmap.ic_launcher)
            // Error image
            .error(R.mipmap.ic_launcher)
            // Specify the image size
            .override(1000, 800)
            // fitCenter: scale proportionally until the width or height equals the ImageView's
            .fitCenter()
            // centerCrop: scale proportionally until both width and height are at least the
            // ImageView's, then crop and show the middle
            .centerCrop()
            // circleCrop: crop the image to a circle
            .circleCrop()
            // Skip the memory cache
            .skipMemoryCache(true)
            // DiskCacheStrategy.ALL: cache all versions of the image
            .diskCacheStrategy(DiskCacheStrategy.ALL)
            // DiskCacheStrategy.NONE: skip the disk cache
            .diskCacheStrategy(DiskCacheStrategy.NONE)
            // DiskCacheStrategy.DATA: cache only the original data
            .diskCacheStrategy(DiskCacheStrategy.DATA)
            // DiskCacheStrategy.RESOURCE: cache only the final transformed image
            .diskCacheStrategy(DiskCacheStrategy.RESOURCE)
            .priority(Priority.HIGH);
    Glide.with(getApplicationContext()).load(url).apply(options).into(view);
}

3. Load the image into the ImageView

 loadImageView(ivPic,"http://b.hiphotos.baidu.com/image/pic/item/d52a2834349b033bda94010519ce36d3d439bdd5.jpg");
 

For a detailed usage tutorial and option configuration, see:

Android image loading framework: the most complete analysis of Glide (part 8), a comprehensive guide to using Glide 4

What is Glide's core execution process?

Basic concepts

Data: the raw, unmodified data, corresponding to dataClass
Resource: the decoded and transformed resource, corresponding to resourceClass
Transcoder: resource transcoders, such as BitmapBytesTranscoder (Bitmap to bytes) and GifDrawableBytesTranscoder
ResourceEncoder: an interface for persisting data; note that it corresponds to local caching, not decoding
ResourceDecoder: data decoders, such as ByteBufferGifDecoder (ByteBuffer to GifDrawable) and StreamBitmapDecoder (InputStream to Bitmap)
ResourceTranscoder: a resource transcoder that converts one resource type into another, such as Bitmap to Drawable or Bitmap to bytes
Transformation: transformations such as FitCenter, CircleCrop, and CenterCrop, or BitmapDrawableTransformation, which processes a Bitmap for a given width and height
Target: the carrier of a Request, the loading target corresponding to the various resource types; it contains lifecycle callback methods so developers can do the corresponding preparation and resource recycling

Overall design

1. A Request implementation, SingleRequest, is constructed to initiate a load request

2. EngineJob and DecodeJob are responsible for creating and starting the task, handling callbacks, and managing resources

3. According to the requested resource type, the matching DataFetcher is finally used to obtain the Data

4. The data is obtained according to the corresponding cache configuration

5. The raw Data is decoded and transformed to generate the Resource that is finally displayed

6. Finally, the image is displayed by calling back the corresponding Target method

Key classes and their functions

Glide: exposes the singleton and static entry points; builds Requests and configures the resource type, cache strategy, image processing, and so on. Simple image requests and view filling can be done directly through this class. It holds BitmapPool, MemoryCache, and ByteArrayPool internally and automatically trims memory in low-memory situations
RequestManagerRetriever: creates RequestManager objects and binds them to the Context for lifecycle purposes
RequestManagerFragment: an empty Fragment that Glide adds to an Activity or Fragment to monitor the host's lifecycle
LifecycleListener: interface for listening to Activity or Fragment lifecycle methods
RequestManager: manages and initiates requests; supports resume, pause, and clear operations
RequestBuilder: builds the Request and configures the resource type, thumbnails, and default image options through BaseRequestOptions
Engine: creates and starts tasks, handles callbacks, and manages active and cached resources
EngineJob: schedules DecodeJobs, adds and removes resource callbacks, and notifies callbacks of results
DecodeJob: implements the Runnable interface; the core class that schedules a task. This is where the grunt work of a request is done: fetching the resource from cache or from the original source, applying transformations, and transcoding. It obtains the appropriate Generator to load data according to the cache type; once the data is loaded successfully, DecodeJob's onDataFetcherReady method is called to process the resource
ResourceCacheGenerator: tries to fetch from the transformed resource cache; if it misses, falls through to DATA_CACHE
DataCacheGenerator: tries to fetch from the untransformed local data cache; if it misses, falls through to SourceGenerator
SourceGenerator: fetches from the original source, either a server or some local original resource
DataFetcher: data loading interface; loads data through loadData and makes the corresponding callbacks
LoadPath: tries to fetch data with a DataFetcher for the given data type, then tries to decode it through one or more DecodePaths
DecodePath: decodes and transcodes a resource for the given data type
Registry: manages the registration of components (data types plus data processing)
ModelLoaderRegistry: registers all ModelLoaders used for data loading
ResourceDecoderRegistry: registers all decoders used for resource conversion
TranscoderRegistry: registers all Transcoders that do special processing after decoding
ResourceEncoderRegistry: registers all encoders that persist Resource data
EncoderRegistry: registers all encoders that persist raw data

Code execution flow

First, a flow chart: it is recommended to analyze the source code together with the flow chart, step by step.

The following is mainly analyzed from three methods: with(), load(), and into().

with()

The with() method returns a RequestManager object that is used to initiate the Request.

 public static RequestManager with(@NonNull Context context) {
    return getRetriever(context).get(context);
  }


2. getRetriever(context) returns a RequestManagerRetriever object, which is used to create the RequestManager.

 private static RequestManagerRetriever getRetriever(@Nullable Context context) {
    // Context could be null for other reasons (ie the user passes in null), but in practice it will
    // only occur due to errors with the Fragment lifecycle.
    Preconditions.checkNotNull(
        context,
        "You cannot start a load on a not yet attached View or a Fragment where getActivity() "
            + "returns null (which usually occurs when getActivity() is called before the Fragment "
            + "is attached or after the Fragment is destroyed).");
    return Glide.get(context).getRequestManagerRetriever();
  }

Glide.get(context) initializes Glide, scans modules, registers components, and so on.

3. The RequestManagerRetriever get(context) method:

 public RequestManager get(@NonNull Context context) {
    if (context == null) {
      throw new IllegalArgumentException("You cannot start a load on a null Context");
    } else if (Util.isOnMainThread() && !(context instanceof Application)) {
      if (context instanceof FragmentActivity) {
        return get((FragmentActivity) context);
      } else if (context instanceof Activity) {
        return get((Activity) context);
      } else if (context instanceof ContextWrapper) {
        return get(((ContextWrapper) context).getBaseContext());
      }
    }

    return getApplicationManager(context);
  }

Different RequestManager objects are created based on the context type and bound to the corresponding lifecycle. If the context is an Application, getApplicationManager is called and ApplicationLifecycle is bound.

 private RequestManager getApplicationManager(@NonNull Context context) {
    // Either an application context or we're on a background thread.
    if (applicationManager == null) {
      synchronized (this) {
        if (applicationManager == null) {
          // Normally pause/resume is taken care of by the fragment we add to the fragment or
          // activity. However, in this case since the manager attached to the application will not
          // receive lifecycle events, we must force the manager to start resumed using
          // ApplicationLifecycle.

          // TODO(b/27524013): Factor out this Glide.get() call.
          Glide glide = Glide.get(context.getApplicationContext());
          applicationManager =
              factory.build(
                  glide,
                  new ApplicationLifecycle(),
                  new EmptyRequestManagerTreeNode(),
                  context.getApplicationContext());
        }
      }
    }

    return applicationManager;
  }

4. If the context is an Activity or Fragment, the supportFragmentGet or fragmentGet method is called. A RequestManagerFragment is created and ActivityFragmentLifecycle is bound for the lifecycle.

  private RequestManager fragmentGet(@NonNull Context context,
      @NonNull android.app.FragmentManager fm,
      @Nullable android.app.Fragment parentHint,
      boolean isParentVisible) {
    RequestManagerFragment current = getRequestManagerFragment(fm, parentHint, isParentVisible);
    RequestManager requestManager = current.getRequestManager();
    if (requestManager == null) {
      // TODO(b/27524013): Factor out this Glide.get() call.
      Glide glide = Glide.get(context);
      requestManager =
          factory.build(
              glide, current.getGlideLifecycle(), current.getRequestManagerTreeNode(), context);
      current.setRequestManager(requestManager);
    }
    return requestManager;
  }


To summarize: the with() method returns a RequestManager object whose lifecycle is bound according to the context type.
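To make the effect of the context type concrete, here is a minimal usage sketch of my own (not Glide source; the class and method names are made up):

import android.app.Activity;
import android.content.Context;
import android.widget.ImageView;

import com.bumptech.glide.Glide;

// Hypothetical helper class used only for illustration.
public class CoverLoader {

  // Bound to the Activity: the request is paused in onStop() and cleared in onDestroy().
  void loadIntoActivity(Activity activity, String url, ImageView imageView) {
    Glide.with(activity).load(url).into(imageView);
  }

  // Bound to ApplicationLifecycle: the request follows the whole process lifetime and is not
  // cancelled when the current page is destroyed.
  void loadWithAppContext(Context context, String url, ImageView imageView) {
    Glide.with(context.getApplicationContext()).load(url).into(imageView);
  }
}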

load()

1. The load() method ultimately produces a RequestBuilder object that builds the Request and performs the request action. The RequestBuilder is first created through the as() method:

  public <ResourceType> RequestBuilder<ResourceType> as(
      @NonNull Class<ResourceType> resourceClass) {
    return new RequestBuilder<>(glide, this, resourceClass, context);
  }

2. Execute load()

  public RequestBuilder<TranscodeType> load(@Nullable String string) {
    return loadGeneric(string);
  }

3. Execute the loadGeneric method

 private RequestBuilder<TranscodeType> loadGeneric(@Nullable Object model) {
    this.model = model;
    isModelSet = true;
    return this;
  }

The load parameter (model) is saved and isModelSet is set to true.

4. apply(RequestOptions requestOptions)

  public RequestBuilder<TranscodeType> apply(@NonNull RequestOptions requestOptions) {
    Preconditions.checkNotNull(requestOptions);
    this.requestOptions = getMutableOptions().apply(requestOptions);
    return this;
  }

To summarize: the load() method returns a RequestBuilder object used to build the Request, and apply() sets the Request's parameters.

into()

into() is the most complex step of the whole process. Simply put, the source Data is loaded according to the cache strategy and the registered ModelLoaders, converted into the configured Resource, and finally displayed on the Target.

1. The RequestBuilder into() implementation:

 private <Y extends Target<TranscodeType>> Y into(
      @NonNull Y target,
      @Nullable RequestListener<TranscodeType> targetListener,
      @NonNull RequestOptions options) {
    Util.assertMainThread();
    Preconditions.checkNotNull(target);
    if (!isModelSet) {
      throw new IllegalArgumentException("You must call #load() before calling #into()");
    }

    options = options.autoClone();
    Request request = buildRequest(target, targetListener, options);

    Request previous = target.getRequest();
    if (request.isEquivalentTo(previous)
        && !isSkipMemoryCacheWithCompletePreviousRequest(options, previous)) {
      request.recycle();
      // If the request is completed, beginning again will ensure the result is re-delivered,
      // triggering RequestListeners and Targets. If the request is failed, beginning again will
      // restart the request, giving it another chance to complete. If the request is already
      // running, we can let it continue running without interruption.
      if (!Preconditions.checkNotNull(previous).isRunning()) {
        // Use the previous request rather than the new one to allow for optimizations like skipping
        // setting placeholders, tracking and un-tracking Targets, and obtaining View dimensions
        // that are done in the individual Request.
        previous.begin();
      }
      return target;
    }

    requestManager.clear(target);
    target.setRequest(request);
    requestManager.track(target, request);

    return target;
  }

2. buildRequest() is executed to generate the Request object. After a series of calls it reaches the following method, which returns a SingleRequest instance:

 private Request obtainRequest(
      Target<TranscodeType> target,
      RequestListener<TranscodeType> targetListener,
      RequestOptions requestOptions,
      RequestCoordinator requestCoordinator,
      TransitionOptions<?, ? super TranscodeType> transitionOptions,
      Priority priority,
      int overrideWidth,
      int overrideHeight) {
    return SingleRequest.obtain(
        context,
        glideContext,
        model,
        transcodeClass,
        requestOptions,
        overrideWidth,
        overrideHeight,
        priority,
        target,
        targetListener,
        requestListener,
        requestCoordinator,
        glideContext.getEngine(),
        transitionOptions.getTransitionFactory());
  }

3. requestManager.track(target, request) is implemented as follows:

  void track(@NonNull Target<?> target, @NonNull Request request) {
    targetTracker.track(target);
    requestTracker.runRequest(request);
  }

4. A new class appears here: RequestTracker, which manages the Request lifecycle. runRequest is implemented as follows:

 public void runRequest(@NonNull Request request) {
    requests.add(request);
    if (!isPaused) {
      request.begin();
    } else {
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Paused, delaying request");
      }
      pendingRequests.add(request);
    }
  }

5. begin() is called to start the Request:

 public void begin() {
    assertNotCallingCallbacks();
    stateVerifier.throwIfRecycled();
    startTime = LogTime.getLogTime();
    if (model == null) {
      if (Util.isValidDimensions(overrideWidth, overrideHeight)) {
        width = overrideWidth;
        height = overrideHeight;
      }
      // Only log at more verbose log levels if the user has set a fallback drawable, because
      // fallback Drawables indicate the user expects null models occasionally.
      int logLevel = getFallbackDrawable() == null ? Log.WARN : Log.DEBUG;
      onLoadFailed(new GlideException("Received null model"), logLevel);
      return;
    }

    if (status == Status.RUNNING) {
      throw new IllegalArgumentException("Cannot restart a running request");
    }

    if (status == Status.COMPLETE) {
      onResourceReady(resource, DataSource.MEMORY_CACHE);
      return;
    }

    // Restarts for requests that are neither complete nor running can be treated as new requests
    // and can run again from the beginning.

    status = Status.WAITING_FOR_SIZE;
    if (Util.isValidDimensions(overrideWidth, overrideHeight)) {
      onSizeReady(overrideWidth, overrideHeight);
    } else {
      target.getSize(this);
    }

    if ((status == Status.RUNNING || status == Status.WAITING_FOR_SIZE)
        && canNotifyStatusChanged()) {
      target.onLoadStarted(getPlaceholderDrawable());
    }
    if (IS_VERBOSE_LOGGABLE) {
      logV("finished run method in " + LogTime.getElapsedMillis(startTime));
    }
  }

If the status is COMPLETE, onResourceReady is called; if it is WAITING_FOR_SIZE, onSizeReady is executed. As we know, Glide generates the final Resource to display based on the actual width and height of the View.

6. The onSizeReady implementation:

  public void onSizeReady(int width, int height) {
    stateVerifier.throwIfRecycled();
    if (IS_VERBOSE_LOGGABLE) {
      logV("Got onSizeReady in " + LogTime.getElapsedMillis(startTime));
    }
    if (status != Status.WAITING_FOR_SIZE) {
      return;
    }
    status = Status.RUNNING;

    float sizeMultiplier = requestOptions.getSizeMultiplier();
    this.width = maybeApplySizeMultiplier(width, sizeMultiplier);
    this.height = maybeApplySizeMultiplier(height, sizeMultiplier);

    if (IS_VERBOSE_LOGGABLE) {
      logV("finished setup for calling load in " + LogTime.getElapsedMillis(startTime));
    }
    loadStatus = engine.load(
        glideContext,
        model,
        requestOptions.getSignature(),
        this.width,
        this.height,
        requestOptions.getResourceClass(),
        transcodeClass,
        priority,
        requestOptions.getDiskCacheStrategy(),
        requestOptions.getTransformations(),
        requestOptions.isTransformationRequired(),
        requestOptions.isScaleOnlyOrNoTransform(),
        requestOptions.getOptions(),
        requestOptions.isMemoryCacheable(),
        requestOptions.getUseUnlimitedSourceGeneratorsPool(),
        requestOptions.getUseAnimationPool(),
        requestOptions.getOnlyRetrieveFromCache(),
        this);

    // This is a hack that's only useful for testing right now where loads complete synchronously
    // even though under any executor running on any thread but the main thread, the load would
    // have completed asynchronously.
    if (status != Status.RUNNING) {
      loadStatus = null;
    }
    if (IS_VERBOSE_LOGGABLE) {
      logV("finished onSizeReady in " + LogTime.getElapsedMillis(startTime));
    }
  }

engine.load() executes the request; the subsequent cache strategy, data loading, and image transformation all happen from here.

7. The Engine load() implementation:

public <R> LoadStatus load(
      GlideContext glideContext,
      Object model,
      Key signature,
      int width,
      int height,
      Class<?> resourceClass,
      Class<R> transcodeClass,
      Priority priority,
      DiskCacheStrategy diskCacheStrategy,
      Map<Class<?>, Transformation<?>> transformations,
      boolean isTransformationRequired,
      boolean isScaleOnlyOrNoTransform,
      Options options,
      boolean isMemoryCacheable,
      boolean useUnlimitedSourceExecutorPool,
      boolean useAnimationPool,
      boolean onlyRetrieveFromCache,
      ResourceCallback cb) {
    Util.assertMainThread();
    long startTime = VERBOSE_IS_LOGGABLE ? LogTime.getLogTime() : 0;

    EngineKey key = keyFactory.buildKey(model, signature, width, height, transformations,
        resourceClass, transcodeClass, options);

    EngineResource<?> active = loadFromActiveResources(key, isMemoryCacheable);
    if (active != null) {
      cb.onResourceReady(active, DataSource.MEMORY_CACHE);
      if (VERBOSE_IS_LOGGABLE) {
        logWithTimeAndKey("Loaded resource from active resources", startTime, key);
      }
      return null;
    }

    EngineResource<?> cached = loadFromCache(key, isMemoryCacheable);
    if (cached != null) {
      cb.onResourceReady(cached, DataSource.MEMORY_CACHE);
      if (VERBOSE_IS_LOGGABLE) {
        logWithTimeAndKey("Loaded resource from cache", startTime, key);
      }
      return null;
    }

    EngineJob<?> current = jobs.get(key, onlyRetrieveFromCache);
    if (current != null) {
      current.addCallback(cb);
      if (VERBOSE_IS_LOGGABLE) {
        logWithTimeAndKey("Added to existing load", startTime, key);
      }
      return new LoadStatus(cb, current);
    }

    EngineJob<R> engineJob =
        engineJobFactory.build(
            key,
            isMemoryCacheable,
            useUnlimitedSourceExecutorPool,
            useAnimationPool,
            onlyRetrieveFromCache);

    DecodeJob<R> decodeJob =
        decodeJobFactory.build(
            glideContext,
            model,
            key,
            signature,
            width,
            height,
            resourceClass,
            transcodeClass,
            priority,
            diskCacheStrategy,
            transformations,
            isTransformationRequired,
            isScaleOnlyOrNoTransform,
            onlyRetrieveFromCache,
            options,
            engineJob);

    jobs.put(key, engineJob);

    engineJob.addCallback(cb);
    engineJob.start(decodeJob);

    if (VERBOSE_IS_LOGGABLE) {
      logWithTimeAndKey("Started new load", startTime, key);
    }
    return new LoadStatus(cb, engineJob);
  }

Here the EngineKey is constructed and used to check whether the memory cache is hit, and then whether the task already exists in the jobs queue. Otherwise an EngineJob and a DecodeJob are built, and engineJob.start(decodeJob) runs the DecodeJob on a thread pool. DecodeJob implements the Runnable interface, so let's look at DecodeJob's run method.

8. DecodeJob's run method ultimately executes runWrapped(), implemented as follows:

private void runWrapped() {
    switch (runReason) {
      case INITIALIZE:
        stage = getNextStage(Stage.INITIALIZE);
        currentGenerator = getNextGenerator();
        runGenerators();
        break;
      case SWITCH_TO_SOURCE_SERVICE:
        runGenerators();
        break;
      case DECODE_DATA:
        decodeFromRetrievedData();
        break;
      default:
        throw new IllegalStateException("Unrecognized run reason: " + runReason);
    }
  }

Different tasks are executed depending on the RunReason. There are two kinds of task:

runGenerators(): loads the data

decodeFromRetrievedData(): processes data that has already been loaded

RunReason describes why the task is being (re)scheduled. It has three enumerated values. INITIALIZE: the task is scheduled for the first time.

SWITCH_TO_SOURCE_SERVICE: the local cache strategy failed and data needs to be fetched again (the stage is Stage.SOURCE), or the fetch failed and the execution and callback happen on different threads.

DECODE_DATA: the data was fetched successfully, but the callback is not on the executing thread, so processing switches back to the task's own thread.

9. getNextStage() determines the stage for obtaining the resource. There are five stages: INITIALIZE, RESOURCE_CACHE, DATA_CACHE, SOURCE, and FINISHED.

There are three data loading stages, RESOURCE_CACHE, DATA_CACHE, and SOURCE, each corresponding to a Generator:

ResourceCacheGenerator: tries to fetch from the transformed resource cache; if the cache misses, it falls through to DATA_CACHE

DataCacheGenerator: tries to fetch from the untransformed local data cache; if the cache misses, it falls through to SourceGenerator

SourceGenerator: fetches from the original source, either a server or some local original resource. The disk cache strategy is configured through DiskCacheStrategy in BaseRequestOptions: ALL, NONE, DATA, RESOURCE, or AUTOMATIC (the default, where the EncodeStrategy depends on the DataFetcher's data source and the ResourceEncoder)

private Stage getNextStage(Stage current) {
    switch (current) {
      case INITIALIZE:
        return diskCacheStrategy.decodeCachedResource()
            ? Stage.RESOURCE_CACHE : getNextStage(Stage.RESOURCE_CACHE);
      case RESOURCE_CACHE:
        return diskCacheStrategy.decodeCachedData()
            ? Stage.DATA_CACHE : getNextStage(Stage.DATA_CACHE);
      case DATA_CACHE:
        // Skip loading from source if the user opted to only retrieve the resource from cache.
        return onlyRetrieveFromCache ? Stage.FINISHED : Stage.SOURCE;
      case SOURCE:
      case FINISHED:
        return Stage.FINISHED;
      default:
        throw new IllegalArgumentException("Unrecognized stage: " + current);
    }
  }

10. getNextGenerator() obtains the appropriate Generator for the current Stage, and then currentGenerator.startNext() is executed. If startNext() returns true along the way, the callback is made directly; otherwise the Stage eventually reaches SOURCE and the task is rescheduled.

  private void runGenerators() {
    currentThread = Thread.currentThread();
    startFetchTime = LogTime.getLogTime();
    boolean isStarted = false;
    while (!isCancelled && currentGenerator != null
        && !(isStarted = currentGenerator.startNext())) {
      stage = getNextStage(stage);
      currentGenerator = getNextGenerator();

      if (stage == Stage.SOURCE) {
        reschedule();
        return;
      }
    }
    // We've run out of stages and generators, give up.
    if ((stage == Stage.FINISHED || isCancelled) && !isStarted) {
      notifyFailed();
    }

    // Otherwise a generator started a new load and we expect to be called back in
    // onDataFetcherReady.
  }

11. Here we analyze the execution of SourceGenerator's startNext(), as follows:

  public boolean startNext() {
    if (dataToCache != null) {
      Object data = dataToCache;
      dataToCache = null;
      cacheData(data);
    }

    if (sourceCacheGenerator != null && sourceCacheGenerator.startNext()) {
      return true;
    }
    sourceCacheGenerator = null;

    loadData = null;
    boolean started = false;
    while (!started && hasNextModelLoader()) {
      loadData = helper.getLoadData().get(loadDataListIndex++);
      if (loadData != null
          && (helper.getDiskCacheStrategy().isDataCacheable(loadData.fetcher.getDataSource())
          || helper.hasLoadPath(loadData.fetcher.getDataClass()))) {
        started = true;
        loadData.fetcher.loadData(helper.getPriority(), this);
      }
    }
    return started;
  }

Finally, a ModelLoader registered during Glide initialization executes its loadData method. When the data is obtained, onDataFetcherReady() is called back, runReason is set to RunReason.DECODE_DATA, and decodeFromRetrievedData() is triggered to transform the source data.

12. decodeFromRetrievedData() processes the data once it has been obtained successfully; runLoadPath(Data data, DataSource dataSource, LoadPath<Data, ResourceType, R> path) decodes and transforms the resource.

 private void decodeFromRetrievedData() {
    if (Log.isLoggable(TAG, Log.VERBOSE)) {
      logWithTimeAndKey("Retrieved data", startFetchTime,
          "data: " + currentData
              + ", cache key: " + currentSourceKey
              + ", fetcher: " + currentFetcher);
    }
    Resource<R> resource = null;
    try {
      resource = decodeFromData(currentFetcher, currentData, currentDataSource);
    } catch (GlideException e) {
      e.setLoggingDetails(currentAttemptingKey, currentDataSource);
      throwables.add(e);
    }
    if (resource != null) {
      notifyEncodeAndRelease(resource, currentDataSource);
    } else {
      runGenerators();
    }
  }
  
  
  @SuppressWarnings("unchecked")
  private <Data> Resource<R> decodeFromFetcher(Data data, DataSource dataSource)
      throws GlideException {
    LoadPath<Data, ?, R> path = decodeHelper.getLoadPath((Class<Data>) data.getClass());
    return runLoadPath(data, dataSource, path);
  }

13. After the data is decoded and transformed, decodeFromRetrievedData() executes notifyEncodeAndRelease(), in which notifyComplete(result, dataSource) is called; callback.onResourceReady is then invoked, as follows:

  @Override
  public void onResourceReady(Resource<R> resource, DataSource dataSource) {
    this.resource = resource;
    this.dataSource = dataSource;
    MAIN_THREAD_HANDLER.obtainMessage(MSG_COMPLETE, this).sendToTarget();
  }


The EngineJob's handleResultOnMainThread() method is then called, as follows:

void handleResultOnMainThread() {
    stateVerifier.throwIfRecycled();
    if (isCancelled) {
      resource.recycle();
      release(false /*isRemovedFromQueue*/);
      return;
    } else if (cbs.isEmpty()) {
      throw new IllegalStateException("Received a resource without any callbacks to notify");
    } else if (hasResource) {
      throw new IllegalStateException("Already have resource");
    }
    engineResource = engineResourceFactory.build(resource, isCacheable);
    hasResource = true;

    // Hold on to resource for duration of request so we don't recycle it in the middle of
    // notifying if it synchronously released by one of the callbacks.
    engineResource.acquire();
    listener.onEngineJobComplete(this, key, engineResource);

    //noinspection ForLoopReplaceableByForEach to improve perf
    for (int i = 0, size = cbs.size(); i < size; i++) {
      ResourceCallback cb = cbs.get(i);
      if (!isInIgnoredCallbacks(cb)) {
        engineResource.acquire();
        cb.onResourceReady(engineResource, dataSource);
      }
    }
    // Our request is complete, so we can release the resource.
    engineResource.release();

    release(false /*isRemovedFromQueue*/);
  }

SingleRequest's onResourceReady is then called, which eventually invokes target.onResourceReady(result, animation):

  private void onResourceReady(Resource<R> resource, R result, DataSource dataSource) {
    // We must call isFirstReadyResource before setting status.
    boolean isFirstResource = isFirstReadyResource();
    status = Status.COMPLETE;
    this.resource = resource;

    if (glideContext.getLogLevel() <= Log.DEBUG) {
      Log.d(GLIDE_TAG, "Finished loading " + result.getClass().getSimpleName() + " from "
          + dataSource + " for " + model + " with size [" + width + "x" + height + "] in "
          + LogTime.getElapsedMillis(startTime) + " ms");
    }

    isCallingCallbacks = true;
    try {
      if ((requestListener == null
          || !requestListener.onResourceReady(result, model, target, dataSource, isFirstResource))
          && (targetListener == null
          || !targetListener.onResourceReady(result, model, target, dataSource, isFirstResource))) {
        Transition<? super R> animation = animationFactory.build(dataSource, isFirstResource);
        target.onResourceReady(result, animation);
      }
    } finally {
      isCallingCallbacks = false;
    }

    notifyLoadSuccess();
  }

How is Glide tied to the lifecycle of activities, fragments, etc.?

In the with() phase, Glide binds the Request according to the context type. An Application context means the request follows the entire application lifecycle. For Activity and Fragment types, a RequestManagerFragment is added to the Activity or Fragment to monitor the lifecycle.

1. The Application lifecycle is bound via ApplicationLifecycle, which follows the App's lifecycle:

  @Override
 public void addListener(@NonNull LifecycleListener listener) {
   listener.onStart();
 }

 @Override
 public void removeListener(@NonNull LifecycleListener listener) {
   // Do nothing.
 }


2. Activities and Fragments are bound via ActivityFragmentLifecycle, which follows the host's lifecycle:

public interface LifecycleListener {

  /**
   * Callback for when {@link android.app.Fragment#onStart()}} or {@link
   * android.app.Activity#onStart()} is called.
   */
  void onStart();

  /**
   * Callback for when {@link android.app.Fragment#onStop()}} or {@link
   * android.app.Activity#onStop()}} is called.
   */
  void onStop();

  /**
   * Callback for when {@link android.app.Fragment#onDestroy()}} or {@link
   * android.app.Activity#onDestroy()} is called.
   */
  void onDestroy();
}

3. The RequestManager registers itself as a lifecycle listener:

  private final Runnable addSelfToLifecycle = new Runnable() {
    @Override
    public void run() {
      lifecycle.addListener(RequestManager.this);
    }
  };

Note that a page has one RequestManagerFragment, which holds a reference to the RequestManager. If a page starts multiple Glide image requests, the existing RequestManager is fetched from the Fragment first, so multiple RequestManager objects are not created repeatedly.

private RequestManager fragmentGet(@NonNull Context context,
      @NonNull android.app.FragmentManager fm,
      @Nullable android.app.Fragment parentHint,
      boolean isParentVisible) {
    RequestManagerFragment current = getRequestManagerFragment(fm, parentHint, isParentVisible);
    RequestManager requestManager = current.getRequestManager();
    if (requestManager == null) {
      // TODO(b/27524013): Factor out this Glide.get() call.
      Glide glide = Glide.get(context);
      requestManager =
          factory.build(
              glide, current.getGlideLifecycle(), current.getRequestManagerTreeNode(), context);
      current.setRequestManager(requestManager);
    }
    return requestManager;
  }

The lifecycle callbacks bound in RequestManager are then executed:

  /**
   * Lifecycle callback that registers for connectivity events (if the
   * android.permission.ACCESS_NETWORK_STATE permission is present) and restarts failed or paused
   * requests.
   */
  @Override
  public void onStart() {
    resumeRequests();
    targetTracker.onStart();
  }

  /**
   * Lifecycle callback that unregisters for connectivity events (if the
   * android.permission.ACCESS_NETWORK_STATE permission is present) and pauses in progress loads.
   */
  @Override
  public void onStop() {
    pauseRequests();
    targetTracker.onStop();
  }

  /**
   * Lifecycle callback that cancels all in progress requests and clears and recycles resources for
   * all completed requests.
   */
  @Override
  public void onDestroy() {
    targetTracker.onDestroy();
    for (Target<?> target : targetTracker.getAll()) {
      clear(target);
    }
    targetTracker.clear();
    requestTracker.clearRequests();
    lifecycle.removeListener(this);
    lifecycle.removeListener(connectivityMonitor);
    mainHandler.removeCallbacks(addSelfToLifecycle);
    glide.unregisterRequestManager(this);
  }

pauseRequests() is called to pause requests when the Activity or Fragment goes into the background, the requests are re-executed when it returns to the foreground, and the corresponding resources are cleaned up and recycled when the page is destroyed.
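Besides this automatic handling, RequestManager also exposes pauseRequests() and resumeRequests() for manual control. A small sketch of my own (the method name is made up), e.g. for pausing loads while a list is being flung:

import android.content.Context;

import com.bumptech.glide.Glide;

public class ScrollAwareLoading {

  // Hypothetical hook called when the scroll state of a list changes.
  void onScrollStateChanged(Context context, boolean isFlinging) {
    if (isFlinging) {
      // Stop starting new requests while the user is flinging quickly.
      Glide.with(context).pauseRequests();
    } else {
      // Restart failed and pending requests once scrolling settles.
      Glide.with(context).resumeRequests();
    }
  }
}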

How does Glide’s cache work?

Introduction

Glide's cache combines a memory cache and a disk cache.

ActiveResources: a map whose values are weak references to resources; caches resources that are currently in use
MemoryCache: implemented with LruResourceCache; caches resources that are not currently in use
DiskCache: disk caching of resources
Http: loads the resource from the server over the network

If both the memory cache and the disk cache are configured, the main loading flow is as follows:

1. When a Request is initiated, the resource is first looked up in ActiveResources. On a hit it is returned and displayed; on a miss, MemoryCache is checked. When a resource is removed from ActiveResources, it is added to MemoryCache

2. On a MemoryCache hit, the resource is moved into ActiveResources and removed from MemoryCache. On a miss, an attempt is made to load the resource from the disk cache

3. Depending on the configured strategy, a disk cache hit is returned and the resource is also cached in ActiveResources; on a miss, a network request is made

4. Resources loaded from the network through the configured ModelLoader are cached to disk and to the memory cache according to the configuration

Key

From the flow analysis we know the Key is generated in Engine's load method; the specific implementation is as follows:

 EngineKey key = keyFactory.buildKey(model, signature, width, height, transformations,
        resourceClass, transcodeClass, options);

To accommodate complex resource transformation and guarantee the uniqueness of the key, many parameters take part in its construction: the model (target address), signature, image width and height, resource transformation configuration, and so on.
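To see why every one of these parameters matters, here is a simplified sketch of such a key (my own illustration, not the real EngineKey): two requests share a cache entry only if every component that can affect the final image is equal.

import java.util.Map;
import java.util.Objects;

// Simplified illustration of a cache key, not the real EngineKey implementation.
final class SimpleEngineKey {
  private final Object model;                     // target address / source
  private final Object signature;                 // user-supplied signature
  private final int width;
  private final int height;
  private final Map<Class<?>, Object> transformations;
  private final Class<?> resourceClass;
  private final Class<?> transcodeClass;
  private final Object options;

  SimpleEngineKey(Object model, Object signature, int width, int height,
      Map<Class<?>, Object> transformations, Class<?> resourceClass,
      Class<?> transcodeClass, Object options) {
    this.model = model;
    this.signature = signature;
    this.width = width;
    this.height = height;
    this.transformations = transformations;
    this.resourceClass = resourceClass;
    this.transcodeClass = transcodeClass;
    this.options = options;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof SimpleEngineKey)) {
      return false;
    }
    SimpleEngineKey other = (SimpleEngineKey) o;
    // A different size, transformation, or decode option produces a different key,
    // and therefore a different cache entry.
    return width == other.width
        && height == other.height
        && Objects.equals(model, other.model)
        && Objects.equals(signature, other.signature)
        && Objects.equals(transformations, other.transformations)
        && Objects.equals(resourceClass, other.resourceClass)
        && Objects.equals(transcodeClass, other.transcodeClass)
        && Objects.equals(options, other.options);
  }

  @Override
  public int hashCode() {
    return Objects.hash(model, signature, width, height,
        transformations, resourceClass, transcodeClass, options);
  }
}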

Memory cache

From the introduction above, Glide's memory cache has two levels: ActiveResources and MemoryCache. Let's look at both mechanisms from the source code.

ActiveResources

1. From the source we can see that ActiveResources uses a HashMap, holding each Resource through a weak reference (ResourceWeakReference):

 final Map<Key, ResourceWeakReference> activeEngineResources = new HashMap<>();

2. Resources are obtained mainly through the get method, as follows:

  EngineResource<?> get(Key key) {
    ResourceWeakReference activeRef = activeEngineResources.get(key);
    if (activeRef == null) {
      return null;
    }

    EngineResource<?> active = activeRef.get();
    if (active == null) {
      cleanupActiveReference(activeRef);
    }
    return active;
  }

If it hits, the resource is returned. Note that if the weak reference has been reclaimed (active == null), cleanupActiveReference is called, as follows:

void cleanupActiveReference(@NonNull ResourceWeakReference ref) {
    Util.assertMainThread();
    activeEngineResources.remove(ref.key);

    if (!ref.isCacheable || ref.resource == null) {
      return;
    }

    EngineResource<?> newResource =
        new EngineResource<>(ref.resource, /*isCacheable=*/ true, /*isRecyclable=*/ false);
    newResource.setResourceListener(ref.key, listener);
    listener.onResourceReleased(ref.key, newResource);
  }


If ref.resource != null, a new EngineResource object is created and the listener's onResourceReleased method is called:

  @Override
  public void onResourceReleased(Key cacheKey, EngineResource<?> resource) {
    Util.assertMainThread();
    activeResources.deactivate(cacheKey);
    if (resource.isCacheable()) {
      cache.put(cacheKey, resource);
    } else {
      resourceRecycler.recycle(resource);
    }
  }

From the source, deactivate is called to remove the entry from activeResources, and the resource is then added to MemoryCache.

3. Writing resources:

  void activate(Key key, EngineResource<?> resource) {
    ResourceWeakReference toPut =
        new ResourceWeakReference(
            key, resource, getReferenceQueue(), isActiveResourceRetentionAllowed);

    ResourceWeakReference removed = activeEngineResources.put(key, toPut);
    if (removed != null) {
      removed.reset();
    }
  }

EngineResource

EngineResource mainly maintains an acquired reference count:

  private int acquired;

When a resource is used, acquire() is called and the counter is incremented:

  void acquire() {
    if (isRecycled) {
      throw new IllegalStateException("Cannot acquire a recycled resource");
    }
    if (!Looper.getMainLooper().equals(Looper.myLooper())) {
      throw new IllegalThreadStateException("Must call acquire on the main thread");
    }
    ++acquired;
  }

When the resource is released, the release() method is called

 void release() {
    if (acquired <= 0) {
      throw new IllegalStateException("Cannot release a recycled or not yet acquired resource");
    }
    if (!Looper.getMainLooper().equals(Looper.myLooper())) {
      throw new IllegalThreadStateException("Must call release on the main thread");
    }
    if (--acquired == 0) {
      listener.onResourceReleased(key, this);
    }
  }

When acquired == 0 the resource is no longer in use and onResourceReleased is called, which moves it into MemoryCache. In this way, images in use are held with weak references and images not in use are cached with an LruCache.

MemoryCache

Glide creates the MemoryCache object during build(); the implementation is as follows:

 if (memoryCache == null) {
      memoryCache = new LruResourceCache(memorySizeCalculator.getMemoryCacheSize());
    }

From the source, MemoryCache is implemented with the LRU algorithm. Looking at LruResourceCache, we find that it extends LruCache, whose key implementation is:

private final Map<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);


So Glide's in-memory LRU cache is implemented mainly on top of LinkedHashMap.

How to implement the LRU cache algorithm with LinkedHashMap
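For reference, here is a minimal LRU cache sketch built on LinkedHashMap in the same spirit (simplified compared with Glide's LruCache, which tracks sizes in bytes rather than entry counts):

import java.util.LinkedHashMap;
import java.util.Map;

// accessOrder = true moves an entry to the tail on every get(), so the head is always the
// least recently used entry and is the first to be evicted when the limit is exceeded.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  SimpleLruCache(int maxEntries) {
    super(100, 0.75f, true /* accessOrder */);
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // Called by LinkedHashMap after every put(); returning true evicts the eldest entry.
    return size() > maxEntries;
  }
}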

BitmapPool

Glide maintains an internal BitmapPool for Bitmap reuse, which reduces GC pressure. It is instantiated in GlideBuilder as follows:

 if (bitmapPool == null) {
      int size = memorySizeCalculator.getBitmapPoolSize();
      if (size > 0) {
        bitmapPool = new LruBitmapPool(size);
      } else {
        bitmapPool = new BitmapPoolAdapter();
      }
    }

Let’s look at the implementation code of LruBitmapPool

1. put

 @Override
  public synchronized void put(Bitmap bitmap) {
    if (bitmap == null) {
      throw new NullPointerException("Bitmap must not be null");
    }
    if (bitmap.isRecycled()) {
      throw new IllegalStateException("Cannot pool recycled bitmap");
    }
    if (!bitmap.isMutable()
        || strategy.getSize(bitmap) > maxSize
        || !allowedConfigs.contains(bitmap.getConfig())) {
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Reject bitmap from pool"
                + ", bitmap: " + strategy.logBitmap(bitmap)
                + ", is mutable: " + bitmap.isMutable()
                + ", is allowed config: " + allowedConfigs.contains(bitmap.getConfig()));
      }
      bitmap.recycle();
      return;
    }

    final int size = strategy.getSize(bitmap);
    strategy.put(bitmap);
    tracker.add(bitmap);

    puts++;
    currentSize += size;

    if (Log.isLoggable(TAG, Log.VERBOSE)) {
      Log.v(TAG, "Put bitmap in pool=" + strategy.logBitmap(bitmap));
    }
    dump();

    evict();
  }


After a series of null, recycled, and size checks, the Bitmap is added to the strategy and counted in the pool.

2. get

  @Override
  @NonNull
  public Bitmap get(int width, int height, Bitmap.Config config) {
    Bitmap result = getDirtyOrNull(width, height, config);
    if (result != null) {
      // Bitmaps in the pool contain random data that in some cases must be cleared for an image
      // to be rendered correctly. we shouldn't force all consumers to independently erase the
      // contents individually, so we do so here. See issue #131.
      result.eraseColor(Color.TRANSPARENT);
    } else {
      result = createBitmap(width, height, config);
    }

    return result;
  }

When a Bitmap is needed, one is retrieved from the pool based on width, height, and config. If none is available, createBitmap is called to create one. On a hit, the Bitmap is removed from the pool, its pixels are erased to transparent, and it is returned.
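To illustrate the reuse the pool enables, here is a rough sketch of my own (not Glide source) that hands a pooled Bitmap to BitmapFactory through inBitmap so decoding writes into existing pixel memory; Glide's Downsampler applies a similar idea internally.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

import com.bumptech.glide.load.engine.bitmap_recycle.BitmapPool;

public class PooledDecode {

  // Hypothetical helper: decode an image byte array while reusing a pooled Bitmap.
  Bitmap decodeWithPool(BitmapPool pool, byte[] bytes, int width, int height) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inMutable = true;
    // getDirty skips erasing the pixels because BitmapFactory overwrites them anyway.
    options.inBitmap = pool.getDirty(width, height, Bitmap.Config.ARGB_8888);
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length, options);
  }
}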

To summarize

Glide uses a two-level memory cache: activeResources is a map whose values are weak references to resources, and the memory cache proper is implemented with LruResourceCache. Resources held in activeResources can be reclaimed at any time. The point of this design is that frequent reads and writes against a strongly referenced LruCache would cause GC churn and memory jitter; while a resource is in use it lives in activeResources, whose weak references can be reclaimed by the system at any time, avoiding memory leaks and excessive memory usage.

Hard disk cache

Caching strategies

Glide caches resources in two forms: SOURCE (the original data) and RESULT (the image after compression, transformation, etc.).

There are five disk cache strategies, listed below. A strategy is chosen by calling diskCacheStrategy() with one of five values:

1. DiskCacheStrategy.NONE: caches nothing

2. DiskCacheStrategy.DATA: caches only the original data

3. DiskCacheStrategy.RESOURCE: caches only the transformed image

4. DiskCacheStrategy.ALL: caches both the original data and the transformed image

5. DiskCacheStrategy.AUTOMATIC: lets Glide intelligently choose a cache strategy based on the resource (the default)
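A short usage sketch of my own showing how one of these strategies is applied to a single request through RequestOptions:

import android.widget.ImageView;

import com.bumptech.glide.Glide;
import com.bumptech.glide.load.engine.DiskCacheStrategy;
import com.bumptech.glide.request.RequestOptions;

public class DiskCacheExample {

  // Cache only the original, untransformed data for this request.
  void loadOriginalOnly(ImageView view, String url) {
    RequestOptions options = new RequestOptions()
        .diskCacheStrategy(DiskCacheStrategy.DATA);
    Glide.with(view.getContext())
        .load(url)
        .apply(options)
        .into(view);
  }
}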

Cache fetching

1. From the flow analysis above, we know the disk cache is handled in DecodeJob. When the task starts, runWrapped() is called, which then calls getNextStage:

private Stage getNextStage(Stage current) {
    switch (current) {
      case INITIALIZE:
        return diskCacheStrategy.decodeCachedResource()
            ? Stage.RESOURCE_CACHE : getNextStage(Stage.RESOURCE_CACHE);
      case RESOURCE_CACHE:
        return diskCacheStrategy.decodeCachedData()
            ? Stage.DATA_CACHE : getNextStage(Stage.DATA_CACHE);
      case DATA_CACHE:
        // Skip loading from source if the user opted to only retrieve the resource from cache.
        return onlyRetrieveFromCache ? Stage.FINISHED : Stage.SOURCE;
      case SOURCE:
      case FINISHED:
        return Stage.FINISHED;
      default:
        throw new IllegalArgumentException("Unrecognized stage: " + current);
    }
  }

The getNextGenerator method is then called as follows:

 private DataFetcherGenerator getNextGenerator() {
    switch (stage) {
      case RESOURCE_CACHE:
        return new ResourceCacheGenerator(decodeHelper, this);
      case DATA_CACHE:
        return new DataCacheGenerator(decodeHelper, this);
      case SOURCE:
        return new SourceGenerator(decodeHelper, this);
      case FINISHED:
        return null;
      default:
        throw new IllegalStateException("Unrecognized stage: " + stage);
    }
  }

The following five policies are available: INITIALIZE, RESOURCE_CACHE, DATA_CACHE, SOURCE, and FINISHED

There are three data loading policies: RESOURCE_CACHE, DATA_CACHE, and SOURCE, which correspond to Generator:

ResourceCacheGenerator: tries to fetch from the transformed resource cache; if the cache misses, it falls through to DATA_CACHE

DataCacheGenerator: tries to fetch from the untransformed local data cache; if the cache misses, it falls through to SourceGenerator

SourceGenerator: fetches from the original source, either a server or some local original resource

Then the startNext method of the concrete Generator is called; the disk cache lookup happens in this method. The key cache lookup code in ResourceCacheGenerator is as follows:

      Key sourceId = sourceIds.get(sourceIdIndex);
      Class<?> resourceClass = resourceClasses.get(resourceClassIndex);
      Transformation<?> transformation = helper.getTransformation(resourceClass);
      // PMD.AvoidInstantiatingObjectsInLoops Each iteration is comparatively expensive anyway,
      // we only run until the first one succeeds, the loop runs for only a limited
      // number of iterations on the order of 10-20 in the worst case.
      currentKey =
          new ResourceCacheKey(// NOPMD AvoidInstantiatingObjectsInLoops
              helper.getArrayPool(),
              sourceId,
              helper.getSignature(),
              helper.getWidth(),
              helper.getHeight(),
              transformation,
              resourceClass,
              helper.getOptions());
      cacheFile = helper.getDiskCache().get(currentKey);
      if (cacheFile != null) {
        sourceKey = sourceId;
        modelLoaders = helper.getModelLoaders(cacheFile);
        modelLoaderIndex = 0;
      }

The key cache fetch code for DataCacheGenerator is as follows:

  Key sourceId = cacheKeys.get(sourceIdIndex);
      // PMD.AvoidInstantiatingObjectsInLoops The loop iterates a limited number of times
      // and the actions it performs are much more expensive than a single allocation.
      @SuppressWarnings("PMD.AvoidInstantiatingObjectsInLoops")
      Key originalKey = new DataCacheKey(sourceId, helper.getSignature());
      cacheFile = helper.getDiskCache().get(originalKey);
      if (cacheFile != null) {
        this.sourceKey = sourceId;
        modelLoaders = helper.getModelLoaders(cacheFile);
        modelLoaderIndex = 0;
      }

Write to cache

1. Caching the Data

Fetching data from the server is handled mainly in SourceGenerator. In onDataReady, if isDataCacheable() returns true, the data is assigned to dataToCache and reschedule() is triggered:

  @Override
  public void onDataReady(Object data) {
    DiskCacheStrategy diskCacheStrategy = helper.getDiskCacheStrategy();
    if (data != null && diskCacheStrategy.isDataCacheable(loadData.fetcher.getDataSource())) {
      dataToCache = data;
      // We might be being called back on someone else's thread. Before doing anything, we should
      // reschedule to get back onto Glide's thread.
      cb.reschedule();
    } else {
      cb.onDataFetcherReady(loadData.sourceKey, data, loadData.fetcher,
          loadData.fetcher.getDataSource(), originalKey);
    }
  }

When starting startNext again, the key implementation is as follows:

 @Override
  public boolean startNext() {
    if (dataToCache != null) {
      Object data = dataToCache;
      dataToCache = null;
      cacheData(data);
    }

cacheData() writes the raw Data to a disk file, as follows:

 private void cacheData(Object dataToCache) {
    long startTime = LogTime.getLogTime();
    try {
      Encoder<Object> encoder = helper.getSourceEncoder(dataToCache);
      DataCacheWriter<Object> writer =
          new DataCacheWriter<>(encoder, dataToCache, helper.getOptions());
      originalKey = new DataCacheKey(loadData.sourceKey, helper.getSignature());
      helper.getDiskCache().put(originalKey, writer);
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Finished encoding source to cache"
            + ", key: " + originalKey
            + ", data: " + dataToCache
            + ", encoder: " + encoder
            + ", duration: " + LogTime.getElapsedMillis(startTime));
      }
    } finally {
      loadData.fetcher.cleanup();
    }

    sourceCacheGenerator =
        new DataCacheGenerator(Collections.singletonList(loadData.sourceKey), helper, this);
  }

2. Cache Resource data

From the flow analysis above, the onResourceDecoded callback in DecodeJob contains the key implementation:

 if (diskCacheStrategy.isResourceCacheable(isFromAlternateCacheKey, dataSource,
        encodeStrategy)) {
      if (encoder == null) {
        throw new Registry.NoResultEncoderAvailableException(transformed.get().getClass());
      }
      final Key key;
      switch (encodeStrategy) {
        case SOURCE:
          key = new DataCacheKey(currentSourceKey, signature);
          break;
        case TRANSFORMED:
          key =
              new ResourceCacheKey(
                  decodeHelper.getArrayPool(),
                  currentSourceKey,
                  signature,
                  width,
                  height,
                  appliedTransformation,
                  resourceSubClass,
                  options);
          break;
        default:
          throw new IllegalArgumentException("Unknown strategy: " + encodeStrategy);
      }

      LockedResource<Z> lockedResult = LockedResource.obtain(transformed);
      deferredEncodeManager.init(key, encoder, lockedResult);
      result = lockedResult;
    }

A DeferredEncodeManager is initialized here. notifyEncodeAndRelease() is then executed, where the key implementation is:

 try {
      if (deferredEncodeManager.hasResourceToEncode()) {
        deferredEncodeManager.encode(diskCacheProvider, options);
      }
    } finally {
      if (lockedResource != null) {
        lockedResource.unlock();
      }
    }

Resource data is cached in encode, and the code is as follows:

   void encode(DiskCacheProvider diskCacheProvider, Options options) {
      GlideTrace.beginSection("DecodeJob.encode");
      try {
        diskCacheProvider.getDiskCache()
            .put(key, new DataCacheWriter<>(encoder, toEncode, options));
      } finally {
        toEncode.unlock();
        GlideTrace.endSection();
      }
    }

Disk cache implementation

1. In GlideBuilder's build method we can see the disk cache factory being created:

if (diskCacheFactory == null) {
      diskCacheFactory = new InternalCacheDiskCacheFactory(context);
    }

2. InternalCacheDiskCacheFactory extends DiskLruCacheFactory; the factory's key build method is as follows:

 @Override
  public DiskCache build() {
    File cacheDir = cacheDirectoryGetter.getCacheDirectory();

    if (cacheDir == null) {
      return null;
    }

    if (!cacheDir.mkdirs() && (!cacheDir.exists() || !cacheDir.isDirectory())) {
      return null;
    }

    return DiskLruCacheWrapper.create(cacheDir, diskCacheSize);
  }

3. The DiskCache implementation is DiskLruCacheWrapper; its get and put methods are as follows:

 @Override
  public File get(Key key) {
    String safeKey = safeKeyGenerator.getSafeKey(key);
    if (Log.isLoggable(TAG, Log.VERBOSE)) {
      Log.v(TAG, "Get: Obtained: " + safeKey + " for for Key: " + key);
    }
    File result = null;
    try {
      // It is possible that the there will be a put in between these two gets. If so that shouldn't
      // be a problem because we will always put the same value at the same key so our input streams
      // will still represent the same data.
      final DiskLruCache.Value value = getDiskCache().get(safeKey);
      if (value != null) {
        result = value.getFile(0);
      }
    } catch (IOException e) {
      if (Log.isLoggable(TAG, Log.WARN)) {
        Log.w(TAG, "Unable to get from disk cache", e);
      }
    }
    return result;
  }

  @Override
  public void put(Key key, Writer writer) {
    // We want to make sure that puts block so that data is available when put completes. We may
    // actually not write any data if we find that data is written by the time we acquire the lock.
    String safeKey = safeKeyGenerator.getSafeKey(key);
    writeLocker.acquire(safeKey);
    try {
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Put: Obtained: " + safeKey + " for for Key: " + key);
      }
      try {
        // We assume we only need to put once, so if data was written while we were trying to get
        // the lock, we can simply abort.
        DiskLruCache diskCache = getDiskCache();
        Value current = diskCache.get(safeKey);
        if (current != null) {
          return;
        }

        DiskLruCache.Editor editor = diskCache.edit(safeKey);
        if (editor == null) {
          throw new IllegalStateException("Had two simultaneous puts for: " + safeKey);
        }
        try {
          File file = editor.getFile(0);
          if (writer.write(file)) {
            editor.commit();
          }
        } finally {
          editor.abortUnlessCommitted();
        }
      } catch (IOException e) {
        if (Log.isLoggable(TAG, Log.WARN)) {
          Log.w(TAG, "Unable to put to disk cache", e);
        }
      }
    } finally {
      writeLocker.release(safeKey);
    }
  }

4. The disk cache is implemented with DiskLruCache. For details about DiskLruCache, see the following blog post:

DiskLruCache cache

What is the underlying network implementation of Glide?

From the flow analysis above, we know that startNext() in SourceGenerator loads the data through the DataFetcher of the ModelLoader registered at initialization. Let's take loading a GlideUrl as an example.

In Glide's constructor, the Registry registers the ModelLoader with the following code:

 .append(GlideUrl.class, InputStream.class, new HttpGlideUrlLoader.Factory())

In SourceGenerator's startNext() method, the corresponding ModelLoader is matched in a loop, as follows:

  boolean started = false;
    while (!started && hasNextModelLoader()) {
      loadData = helper.getLoadData().get(loadDataListIndex++);
      if (loadData != null
          && (helper.getDiskCacheStrategy().isDataCacheable(loadData.fetcher.getDataSource())
          || helper.hasLoadPath(loadData.fetcher.getDataClass()))) {
        started = true;
        loadData.fetcher.loadData(helper.getPriority(), this);
      }
    }

HttpGlideUrlLoader's buildLoadData creates the LoadData with an HttpUrlFetcher:

 @Override
  public LoadData<InputStream> buildLoadData(@NonNull GlideUrl model, int width, int height,
      @NonNull Options options) {
    // GlideUrls memoize parsed URLs so caching them saves a few object instantiations and time
    // spent parsing urls.
    GlideUrl url = model;
    if (modelCache != null) {
      url = modelCache.get(model, 0, 0);
      if (url == null) {
        modelCache.put(model, 0, 0, model);
        url = model;
      }
    }
    int timeout = options.get(TIMEOUT);
    return new LoadData<>(url, new HttpUrlFetcher(url, timeout));
  }

The implementation of HttpUrlFetcher to load network data is as follows:

@Override
  public void loadData(@NonNull Priority priority,
      @NonNull DataCallback<? super InputStream> callback) {
    long startTime = LogTime.getLogTime();
    try {
      InputStream result = loadDataWithRedirects(glideUrl.toURL(), 0, null, glideUrl.getHeaders());
      callback.onDataReady(result);
    } catch (IOException e) {
      if (Log.isLoggable(TAG, Log.DEBUG)) {
        Log.d(TAG, "Failed to load data for url", e);
      }
      callback.onLoadFailed(e);
    } finally {
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Finished http url fetcher fetch in " + LogTime.getElapsedMillis(startTime));
      }
    }
  }
  
   private InputStream loadDataWithRedirects(URL url, int redirects, URL lastUrl,
      Map<String, String> headers) throws IOException {
    if (redirects >= MAXIMUM_REDIRECTS) {
      throw new HttpException("Too many (> " + MAXIMUM_REDIRECTS + ") redirects!");
    } else {
      // Comparing the URLs using .equals performs additional network I/O and is generally broken.
      // See http://michaelscharf.blogspot.com/2006/11/javaneturlequals-and-hashcode-make.html.
      try {
        if (lastUrl != null && url.toURI().equals(lastUrl.toURI())) {
          throw new HttpException("In re-direct loop");
        }
      } catch (URISyntaxException e) {
        // Do nothing, this is best effort.
      }
    }

    urlConnection = connectionFactory.build(url);
    for (Map.Entry<String, String> headerEntry : headers.entrySet()) {
      urlConnection.addRequestProperty(headerEntry.getKey(), headerEntry.getValue());
    }
    urlConnection.setConnectTimeout(timeout);
    urlConnection.setReadTimeout(timeout);
    urlConnection.setUseCaches(false);
    urlConnection.setDoInput(true);

    // Stop the urlConnection instance of HttpUrlConnection from following redirects so that
    // redirects will be handled by recursive calls to this method, loadDataWithRedirects.
    urlConnection.setInstanceFollowRedirects(false);

    // Connect explicitly to avoid errors in decoders if connection fails.
    urlConnection.connect();
    // Set the stream so that it's closed in cleanup to avoid resource leaks. See #2352.
    stream = urlConnection.getInputStream();
    if (isCancelled) {
      return null;
    }
    final int statusCode = urlConnection.getResponseCode();
    if (isHttpOk(statusCode)) {
      return getStreamForSuccessfulRequest(urlConnection);
    } else if (isHttpRedirect(statusCode)) {
      String redirectUrlString = urlConnection.getHeaderField("Location");
      if (TextUtils.isEmpty(redirectUrlString)) {
        throw new HttpException("Received empty or null redirect url");
      }
      URL redirectUrl = new URL(url, redirectUrlString);
      // Closing the stream specifically is required to avoid leaking ResponseBodys in addition
      // to disconnecting the url connection below. See #2352.
      cleanup();
      return loadDataWithRedirects(redirectUrl, redirects + 1, url, headers);
    } else if (statusCode == INVALID_STATUS_CODE) {
      throw new HttpException(statusCode);
    } else {
      throw new HttpException(urlConnection.getResponseMessage(), statusCode);
    }
  }

From this analysis we can see that Glide's default network loading uses HttpURLConnection. Of course, we can also provide a custom ModelLoader and use a network framework such as OkHttp or Volley for loading.
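As a sketch of what such a customization can look like, the separate okhttp3 integration artifact (com.github.bumptech.glide:okhttp3-integration) ships an OkHttpUrlLoader whose factory can replace the default loader inside an AppGlideModule; the module class name below is made up:

import android.content.Context;

import com.bumptech.glide.Glide;
import com.bumptech.glide.Registry;
import com.bumptech.glide.annotation.GlideModule;
import com.bumptech.glide.integration.okhttp3.OkHttpUrlLoader;
import com.bumptech.glide.load.model.GlideUrl;
import com.bumptech.glide.module.AppGlideModule;

import java.io.InputStream;

import okhttp3.OkHttpClient;

@GlideModule
public class OkHttpGlideModule extends AppGlideModule {
  @Override
  public void registerComponents(Context context, Glide glide, Registry registry) {
    OkHttpClient client = new OkHttpClient();
    // Replace the default GlideUrl -> InputStream loader (HttpGlideUrlLoader) with OkHttp.
    registry.replace(GlideUrl.class, InputStream.class, new OkHttpUrlLoader.Factory(client));
  }
}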

For details, see the blog post Glide 4.x: adding custom components.

What design patterns are used in Glide code? Are there any clever designs?

1. Builder pattern

Glide object creation uses the Builder pattern, which separates the construction of a complex object from its representation. Callers do not need to know the complex creation process; they configure the object through the builder's methods.
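For example, an AppGlideModule can configure the singleton through GlideBuilder's setters before Glide is built; a small sketch of my own (the module name and cache size are arbitrary):

import android.content.Context;

import com.bumptech.glide.GlideBuilder;
import com.bumptech.glide.annotation.GlideModule;
import com.bumptech.glide.load.engine.cache.LruResourceCache;
import com.bumptech.glide.module.AppGlideModule;

@GlideModule
public class MyGlideModule extends AppGlideModule {
  @Override
  public void applyOptions(Context context, GlideBuilder builder) {
    // Override the default memory cache size (20 MB here, purely as an example).
    builder.setMemoryCache(new LruResourceCache(20 * 1024 * 1024));
  }
}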

2. Facade pattern

Glide exposes a unified entry point and hides the internal implementation, making the library simple and convenient to use.

3. Strategy pattern

Resource fetching through DataFetcherGenerator in DecodeJob uses the Strategy pattern to encapsulate the different data-loading algorithms.

4. Factory pattern

ModelLoaders are created through ModelLoaderFactory; Engine uses EngineJobFactory; DiskLruCacheFactory creates the disk cache; and so on.

Conclusion

Thoughts

Because of its powerful functionality and efficient operating mechanism, Glide's source code is quite complex. I ran into many difficulties while studying it, but kept going step by step. Sometimes giving up is only a momentary thought; if you persist, you will eventually be rewarded.

References

Android image loading framework: the most complete analysis of Glide (part 8), a comprehensive guide to using Glide 4

How to implement LRU caching algorithm with LinkedHashMap

DiskLruCache cache

Glide parsing – cache

Glide 4.x: adding custom components

Android Glide source code analysis

Recommended

Android source code series – decrypt OkHttp

Android source code series – Decrypt Retrofit

Android source code series – Decrypt Glide

Android source code series – Decrypt EventBus

Android source code series – decrypt RxJava

Android source code series – Decrypt LeakCanary

Android source code series – decrypt BlockCanary

About

Welcome to follow my personal WeChat official account.

WeChat search: Yizhaofusheng, or search the account ID: Life2Code

  • Author: Huang Junbin
  • Blog: junbin.tech
  • GitHub: junbin1011
  • Zhihu: @JunBin (www.zhihu.com/people/junb…)