Preface
I recently had the idea of doing an in-depth analysis of the main Android open source frameworks and writing a series of articles about them, covering both detailed usage and source code analysis. The goal is to understand the underlying principles of each framework by reading the source code written by the masters, so that we not only know what it does, but also why it does it that way.
Here is my own approach to reading source code: I follow the same path I would take when using a framework (or a piece of system source code) day to day. First learn how to use it, then drill into what each step does under the hood, which design patterns it uses, and why it was designed that way.
Series of articles:
- Android mainstream open source framework (1): OkHttp: HttpClient and HttpURLConnection usage details
- Android mainstream open source framework (2): OkHttp usage details
- Android mainstream open source framework (3): OkHttp source code analysis
- Android mainstream open source framework (4): Retrofit usage details
- Android mainstream open source framework (5): Retrofit source code analysis
- Android mainstream open source framework (6): Glide execution process source code analysis
- Android mainstream open source framework (7): Glide cache mechanism
- More frameworks to come…
Check out AndroidNotes for more articles.
The previous article mainly covered Glide's execution process. Memory and disk caching were disabled there, so the cache-related steps were skipped. If you haven't read it, I suggest doing so first, because much of the cache mechanism in this article connects back to it.
1. Glide cache
By default, Glide checks the following caches in order before loading an image:
- Active resources: images currently in use
- Memory cache: images in the memory cache
- Resource: transformed images in the disk cache
- Data: original images in the disk cache
This means Glide actually has four levels of cache: the first two are in-memory and the last two are on disk. The levels are checked in order; as soon as one of them has a cached copy, the image is returned directly, otherwise the next level is checked. If none of them has it, Glide loads the image from the original source (a File, a Uri, a remote image URL, and so on).
2. Cache Key
A cache needs a unique cache Key to store and look up the corresponding data, so let's see how Glide's cache Key is generated. It is built in the load() method of the Engine class:
```java
/*Engine*/
public <R> LoadStatus load(
    GlideContext glideContext,
    Object model,
    Key signature,
    int width,
    int height,
    Class<?> resourceClass,
    Class<R> transcodeClass,
    Priority priority,
    DiskCacheStrategy diskCacheStrategy,
    Map<Class<?>, Transformation<?>> transformations,
    boolean isTransformationRequired,
    boolean isScaleOnlyOrNoTransform,
    Options options,
    boolean isMemoryCacheable,
    boolean useUnlimitedSourceExecutorPool,
    boolean useAnimationPool,
    boolean onlyRetrieveFromCache,
    ResourceCallback cb,
    Executor callbackExecutor) {
  EngineKey key =
      keyFactory.buildKey(
          model, signature, width, height, transformations, resourceClass, transcodeClass, options);
  ...
}
```
Follow into buildKey():
```java
/*EngineKeyFactory*/
EngineKey buildKey(
    Object model,
    Key signature,
    int width,
    int height,
    Map<Class<?>, Transformation<?>> transformations,
    Class<?> resourceClass,
    Class<?> transcodeClass,
    Options options) {
  return new EngineKey(
      model, signature, width, height, transformations, resourceClass, transcodeClass, options);
}
```
```java
class EngineKey implements Key {
  ...
  @Override
  public boolean equals(Object o) {
    if (o instanceof EngineKey) {
      EngineKey other = (EngineKey) o;
      return model.equals(other.model)
          && signature.equals(other.signature)
          && height == other.height
          && width == other.width
          && transformations.equals(other.transformations)
          && resourceClass.equals(other.resourceClass)
          && transcodeClass.equals(other.transcodeClass)
          && options.equals(other.options);
    }
    return false;
  }

  @Override
  public int hashCode() {
    if (hashCode == 0) {
      hashCode = model.hashCode();
      hashCode = 31 * hashCode + signature.hashCode();
      hashCode = 31 * hashCode + width;
      hashCode = 31 * hashCode + height;
      hashCode = 31 * hashCode + transformations.hashCode();
      hashCode = 31 * hashCode + resourceClass.hashCode();
      hashCode = 31 * hashCode + transcodeClass.hashCode();
      hashCode = 31 * hashCode + options.hashCode();
    }
    return hashCode;
  }
  ...
}
```
As you can see, parameters such as the model (File, Uri, remote image URL, etc.), the signature, and the width and height (the width and height of the View that displays the image, not of the image itself) are passed in, and an EngineKey object (the cache Key) is built through EngineKeyFactory. EngineKey guarantees that the cache Key is unique by overriding the equals() and hashCode() methods.
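The idea of a composite cache Key can be shown with a minimal standalone sketch. This is not Glide's actual EngineKey (the class name and reduced field set are mine); it only demonstrates why overriding equals() and hashCode() over all load parameters makes two identical loads map to the same cache entry:

```java
import java.util.Objects;

// A minimal sketch (not Glide's actual EngineKey): two loads with identical
// parameters produce equal keys, so they hit the same cache entry.
final class SketchEngineKey {
    private final Object model;      // e.g. the image URL
    private final Object signature;  // extra version info, empty by default
    private final int width;         // width of the target View, not the image
    private final int height;

    SketchEngineKey(Object model, Object signature, int width, int height) {
        this.model = model;
        this.signature = signature;
        this.width = width;
        this.height = height;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof SketchEngineKey)) return false;
        SketchEngineKey other = (SketchEngineKey) o;
        return model.equals(other.model)
                && signature.equals(other.signature)
                && width == other.width
                && height == other.height;
    }

    @Override
    public int hashCode() {
        return Objects.hash(model, signature, width, height);
    }

    public static void main(String[] args) {
        SketchEngineKey a = new SketchEngineKey("http://a.com/1.png", "", 200, 200);
        SketchEngineKey b = new SketchEngineKey("http://a.com/1.png", "", 200, 200);
        SketchEngineKey c = new SketchEngineKey("http://a.com/1.png", "", 100, 100);
        if (!a.equals(b)) throw new AssertionError("same parameters must give the same key");
        if (a.equals(c)) throw new AssertionError("different sizes must give different keys");
    }
}
```

Note that the target size is part of the key, which is why the same URL loaded into two differently sized Views is cached twice.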
Although many parameters determine the cache Key, none of them change once the image-loading code is written. This is exactly why many people run into the problem of "the server returned a new image, but the app still shows the old one": the image on the server changed, but its URL stayed the same, and since none of the other parameters that determine the cache Key changed either, Glide assumes it already has the image cached and reads it from the cache instead of downloading it again, showing the same image as before.
There are several ways to solve this problem:
(1) Don't keep the image URL fixed: if an image changes, its URL should change too.
(2) Use signature(). We just saw that the parameters that determine the cache Key include the signature, and Glide provides the signature() method to change it:
```java
Glide.with(this).load(url).signature(new ObjectKey(timeModified)).into(imageView);
```
Here timeModified can be any data; this example uses the image's last-modified time. If the image changes, the server changes this field and returns it to the client along with the image URL, so the client knows the image has changed and needs to be downloaded again.
(3) Disable caching: turn off the memory and disk caches when loading the image, so that every load re-downloads the latest version.
```java
Glide.with(this)
    .load(url)
    .skipMemoryCache(true) // Disable the memory cache
    .diskCacheStrategy(DiskCacheStrategy.NONE) // Disable the disk cache
    .into(imageView);
```
All three approaches solve the problem, but the first is recommended: it is the cleanest design, and the backend should really be built this way. The second also works, but it undoubtedly adds extra work for both the backend and the frontend. The third is the least recommended, since it abandons caching altogether and downloads the image from the server every time, which wastes the user's bandwidth and hurts the experience with a wait on every load.
3. Cache strategy
Before we talk about memory caching and disk caching in Glide, let’s look at the cache strategy. For example, a caching strategy for loading an image to display on a device should look like this:
When the program first loads an image from the network, it caches it to the device's disk so that the next time the image is needed it doesn't have to be loaded from the network. To improve the user experience, a copy is often also cached in memory, because loading from memory is faster than loading from disk. The next time the program loads the image, it first looks in memory; if it doesn't find it, it looks on disk; if it still doesn't find it, it loads it from the network.
The cache strategy here involves adding, fetching, and removing cache entries, and the logic of when to perform these operations constitutes a caching algorithm. The most commonly used one today is LRU (Least Recently Used); its core idea is that when the cache is full, the least recently used entries are evicted first. On Android, LruCache implements an LRU memory cache and DiskLruCache implements an LRU disk cache, so the strategy above can be implemented by combining LruCache and DiskLruCache.
Internally, both LruCache and DiskLruCache use a LinkedHashMap that holds strong references to the cached objects, and they provide get() and put() methods to fetch and add entries. When the cache is full, the least recently used entries are removed to make room for new ones.
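This LRU idea can be sketched in a few lines using LinkedHashMap's access-order mode, the same building block the article describes. This is a minimal illustration, not Glide's or the SDK's LruCache (no size accounting by byte count, no locking):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: accessOrder=true moves an entry to the tail on
// every get()/put(), so the head is always the least recently used entry,
// and removeEldestEntry() evicts it once capacity is exceeded.
class SketchLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    SketchLruCache(int maxSize) {
        super(16, 0.75f, true); // true = access-order iteration
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the LRU entry when over capacity
    }

    public static void main(String[] args) {
        SketchLruCache<String, String> cache = new SketchLruCache<>(2);
        cache.put("a", "A");
        cache.put("b", "B");
        cache.get("a");      // "a" is now the most recently used
        cache.put("c", "C"); // capacity exceeded, so "b" is evicted
        if (cache.containsKey("b")) throw new AssertionError("b should be evicted");
        if (!cache.containsKey("a") || !cache.containsKey("c"))
            throw new AssertionError("a and c should survive");
    }
}
```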
Glide's memory cache and disk cache also use LruCache and DiskLruCache. The LruCache is not the one in the SDK but Glide's own implementation, though the principle is the same; the DiskLruCache is JakeWharton's packaged DiskLruCache.
4. Memory cache
Glide has the memory cache enabled by default, and it also provides an API to enable or disable it:
```java
// Enable the memory cache
Glide.with(this).load(url).skipMemoryCache(false).into(imageView);
// Disable the memory cache
Glide.with(this).load(url).skipMemoryCache(true).into(imageView);
```
As stated at the beginning of this article, Glide checks the four cache levels before loading an image. Now that we have the cache Key, let's look at how the first two levels, the memory caches, are read. As we learned in the previous article, the code that reads the memory cache is also in the load() method of the Engine class:
```java
/*Engine*/
public <R> LoadStatus load(...) {
  // Build the cache Key
  EngineKey key =
      keyFactory.buildKey(
          model, signature, width, height, transformations, resourceClass, transcodeClass, options);
  EngineResource<?> memoryResource;
  synchronized (this) {
    // Load cached data from memory
    memoryResource = loadFromMemory(key, isMemoryCacheable, startTime);
    ...
  }
  // The load-complete callback
  cb.onResourceReady(memoryResource, DataSource.MEMORY_CACHE);
  return null;
}
```
Follow into the loadFromMemory() method:
```java
/*Engine*/
private EngineResource<?> loadFromMemory(
    EngineKey key, boolean isMemoryCacheable, long startTime) {
  if (!isMemoryCacheable) {
    return null;
  }
  // (1)
  EngineResource<?> active = loadFromActiveResources(key);
  if (active != null) {
    if (VERBOSE_IS_LOGGABLE) {
      logWithTimeAndKey("Loaded resource from active resources", startTime, key);
    }
    return active;
  }
  // (2)
  EngineResource<?> cached = loadFromCache(key);
  if (cached != null) {
    if (VERBOSE_IS_LOGGABLE) {
      logWithTimeAndKey("Loaded resource from cache", startTime, key);
    }
    return cached;
  }
  return null;
}
```
I have marked two points of interest here:
- (1): loads cached data from the active resources (ActiveResources).
- (2): loads cached data from the memory cache.
Yes, these are the first two tiers of Glide's four-level cache. ActiveResources internally operates on a HashMap whose values are weak references; in other words, a weak-reference HashMap is used to cache the active resources. Let's analyze these two points:
- Point (1) in Engine#loadFromMemory()
Follow into point (1):
```java
/*Engine*/
private EngineResource<?> loadFromActiveResources(Key key) {
  EngineResource<?> active = activeResources.get(key);
  if (active != null) {
    active.acquire();
  }
  return active;
}
```
Moving on to the get() method:
```java
/*ActiveResources*/
synchronized EngineResource<?> get(Key key) {
  // Get the ResourceWeakReference from the HashMap
  ResourceWeakReference activeRef = activeEngineResources.get(key);
  if (activeRef == null) {
    return null;
  }
  // Get the active resource from the weak reference
  EngineResource<?> active = activeRef.get();
  if (active == null) {
    cleanupActiveReference(activeRef);
  }
  return active;
}
```
As you can see, the ResourceWeakReference (which extends WeakReference) is fetched from the HashMap first, and then the active resource, i.e. the image currently in use, is fetched from the weak reference. In other words, the images in use are held by weak references stored in the HashMap.
Continue with the acquire() method:
```java
/*EngineResource*/
synchronized void acquire() {
  if (isRecycled) {
    throw new IllegalStateException("Cannot acquire a recycled resource");
  }
  ++acquired;
}
```
Here the acquired variable is incremented; it records how many times the image is currently referenced. Wherever there is an acquire() (+1) there must be a matching release() (-1):
```java
/*EngineResource*/
void release() {
  boolean release = false;
  synchronized (this) {
    if (acquired <= 0) {
      throw new IllegalStateException("Cannot release a recycled or not yet acquired resource");
    }
    if (--acquired == 0) {
      release = true;
    }
  }
  if (release) {
    listener.onResourceReleased(key, this);
  }
}

/*Engine*/
@Override
public void onResourceReleased(Key cacheKey, EngineResource<?> resource) {
  activeResources.deactivate(cacheKey);
  if (resource.isMemoryCacheable()) {
    cache.put(cacheKey, resource);
  } else {
    resourceRecycler.recycle(resource, /*forceNextFrame=*/ false);
  }
}

/*ActiveResources*/
synchronized void deactivate(Key key) {
  ResourceWeakReference removed = activeEngineResources.remove(key);
  if (removed != null) {
    removed.reset();
  }
}
```
When acquired drops to 0, Engine#onResourceReleased() is called. There, the active resource is first removed from the weak-reference HashMap (clearing the active resource) and then put into the memory cache (storing into the memory cache).
In other words, the release() method essentially frees the resource. It is called, for example, when we scroll from one screen to the next and the previous images are no longer visible; it is also eventually reached via onDestroy() when we close the page that currently displays the image. In both cases the image is clearly no longer needed, so it makes sense to release the active resources cached in the weak-reference HashMap.
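The acquire()/release() bookkeeping can be sketched minimally. This is not Glide's EngineResource (the class and listener names are mine); it only shows the pattern: the count tracks how many consumers use the image, and the listener fires when the last one releases it, which in Glide is where the resource moves from the weak-reference map into the LruCache:

```java
// Minimal reference-counting sketch: acquire() increments, release()
// decrements, and when the count reaches 0 a listener is told the resource
// is no longer active.
class SketchRefCounted {
    interface Listener { void onReleased(SketchRefCounted res); }

    private final Listener listener;
    private int acquired;

    SketchRefCounted(Listener listener) { this.listener = listener; }

    synchronized void acquire() {
        acquired++;
    }

    void release() {
        boolean released;
        synchronized (this) {
            if (acquired <= 0) throw new IllegalStateException("not acquired");
            released = (--acquired == 0);
        }
        if (released) listener.onReleased(this); // notify outside the lock
    }

    public static void main(String[] args) {
        final boolean[] moved = {false};
        SketchRefCounted res = new SketchRefCounted(r -> moved[0] = true);
        res.acquire();
        res.acquire();  // two Views show the same image
        res.release();  // one View goes away: still active
        if (moved[0]) throw new AssertionError("still referenced, must stay active");
        res.release();  // last reference gone
        if (!moved[0]) throw new AssertionError("listener should fire at count 0");
    }
}
```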
In this way, images in use are cached with weak references, and images not in use are cached with the LruCache.
- Point (2) in Engine#loadFromMemory()
Follow into point (2):
```java
/*Engine*/
private EngineResource<?> loadFromCache(Key key) {
  // (2.1)
  EngineResource<?> cached = getEngineResourceFromCache(key);
  if (cached != null) {
    // (2.2)
    cached.acquire();
    // (2.3)
    activeResources.activate(key, cached);
  }
  return cached;
}
```
I have marked three points of interest here:
- (2.1): this fetches from the memory cache. Follow in to see:
```java
/*Engine*/
private EngineResource<?> getEngineResourceFromCache(Key key) {
  Resource<?> cached = cache.remove(key);
  final EngineResource<?> result;
  if (cached == null) {
    result = null;
  } else if (cached instanceof EngineResource) {
    // Save an object allocation if we've cached an EngineResource (the typical case).
    result = (EngineResource<?>) cached;
  } else {
    result =
        new EngineResource<>(
            cached, /*isMemoryCacheable=*/ true, /*isRecyclable=*/ true, key, /*listener=*/ this);
  }
  return result;
}
```
As you can see, the cache here is an LruResourceCache, and the remove() call both deletes the entry and returns it. LruResourceCache extends LruCache. Although it is not the LruCache in the SDK, the principle is the same: the memory cache is implemented with the LRU algorithm.
- (2.2): similar to fetching an active resource in point (1), the acquired variable is incremented to record how many times the image is referenced.
- (2.3): puts the cache entry just fetched from the memory cache into the weak-reference HashMap.
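Putting the two memory tiers together, the lookup flow can be sketched as a small standalone example. The class and method names are mine, not Glide's; the point is the shape of the logic: check the weak-reference map first, then the LRU map, and promote an LRU hit back into the active tier (removing it from the LRU map, mirroring the remove() in getEngineResourceFromCache()):

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Two-tier memory cache sketch: tier 1 is a weak-reference map of in-use
// images, tier 2 is an access-ordered map standing in for the LruCache.
class SketchMemoryCache {
    private final Map<String, WeakReference<Object>> active = new HashMap<>();
    private final Map<String, Object> lru = new LinkedHashMap<>(16, 0.75f, true);

    Object get(String key) {
        WeakReference<Object> ref = active.get(key);
        Object value = (ref == null) ? null : ref.get();
        if (value != null) return value;  // tier 1 hit: active resources
        value = lru.remove(key);          // tier 2: remove-and-return
        if (value != null) active.put(key, new WeakReference<>(value)); // promote
        return value;
    }

    void putInactive(String key, Object value) { lru.put(key, value); }

    public static void main(String[] args) {
        SketchMemoryCache cache = new SketchMemoryCache();
        cache.putInactive("url", "bitmap");
        Object hit = cache.get("url");    // served from the LRU tier, then promoted
        if (!"bitmap".equals(hit)) throw new AssertionError("expected an LRU hit");
        if (cache.lru.containsKey("url"))
            throw new AssertionError("entry should be promoted out of the LRU tier");
        if (!"bitmap".equals(cache.get("url")))
            throw new AssertionError("now served from the active tier");
    }
}
```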
Looking back at the text I highlighted, these two points cover fetching active resources, clearing active resources, fetching from the memory cache, and storing into the memory cache; evicting the memory cache is handled automatically by the LRU algorithm. Have you noticed that the step of storing active resources is missing?
So where do active resources come from in the first place? Essentially from the data returned by the network request. As we saw in the previous article, after the network response is decoded, the active resource is stored in Engine#onEngineJobComplete():
```java
/*Engine*/
@Override
public synchronized void onEngineJobComplete(
    EngineJob<?> engineJob, Key key, EngineResource<?> resource) {
  // A null resource indicates that the load failed, usually due to an exception.
  if (resource != null && resource.isMemoryCacheable()) {
    activeResources.activate(key, resource);
  }
  jobs.removeIfCurrent(key, engineJob);
}

/*ActiveResources*/
synchronized void activate(Key key, EngineResource<?> resource) {
  ResourceWeakReference toPut =
      new ResourceWeakReference(
          key, resource, resourceReferenceQueue, isActiveResourceRetentionAllowed);
  // Store the active resource
  ResourceWeakReference removed = activeEngineResources.put(key, toPut);
  if (removed != null) {
    removed.reset();
  }
}
```
That is Glide's memory cache. Notice that besides the LruCache there is a second implementation: a weak-reference HashMap. If we were designing this ourselves, we would probably only think of the LruCache. So what is the benefit of the extra weak-reference HashMap?
The usual explanation is that caching in-use images in activeResources protects them from being evicted by the LruCache algorithm. I don't find that explanation convincing; I don't think the weak-reference HashMap really exists to "protect images from LRU eviction". In my view it serves the following purposes (please correct me if I'm wrong):
(1) It improves access efficiency: ActiveResources uses a HashMap, while LruCache uses a LinkedHashMap with access order enabled at instantiation (as below), so HashMap access is faster than LinkedHashMap access.
```java
// accessOrder is set to true to enable access-order mode
Map<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);
```
(2) The HashMap in ActiveResources holds weak references, while the LinkedHashMap in LruCache holds strong references. Because weakly referenced objects can always be reclaimed by the GC, this helps prevent memory leaks. The differences between the reference types:
- Strong reference: a direct object reference.
- Soft reference: when an object is only softly reachable, it is reclaimed by the GC once the system runs low on memory.
- Weak reference: when an object is only weakly reachable, it is reclaimed by the GC at the next collection.
5. Disk cache
5.1 Disk Caching Policy
As mentioned earlier, disabling the disk cache only requires the following setting:
```java
Glide.with(this).load(url).diskCacheStrategy(DiskCacheStrategy.NONE).into(imageView);
```
DiskCacheStrategy encapsulates the disk caching strategy. There are five strategies:
- ALL: caches both original and converted images.
- NONE: No content is cached.
- DATA: Only raw images are cached.
- RESOURCE: Caches only converted images.
- AUTOMATIC: The default policy, which tries to use the best policy for local and remote images. If it is a remote image, only the original image is cached. If the image is local, only the converted image is cached.
In short, the five strategies correspond to the last two cache levels mentioned at the beginning of the article: the Resource level (transformed images) and the Data level (original images). Next, let's analyze in the source where each is fetched, stored, and cleaned up.
5.2 Resource Type (Resource)
This level of cache only caches images after conversion, so we need to configure the following policy first:
```java
Glide.with(this).load(url).diskCacheStrategy(DiskCacheStrategy.RESOURCE).into(imageView);
```
As we saw in the previous article, the disk cache comes into play after execution switches from the main thread to a worker thread to run the request, so we start directly from the run() method of the DecodeJob task:
```java
/*DecodeJob*/
@Override
public void run() {
  ...
  try {
    // Execute
    runWrapped();
  } catch (CallbackException e) {
    throw e;
  }
  ...
}
```
Continue the runWrapped() method:
```java
/*DecodeJob*/
private void runWrapped() {
  switch (runReason) {
    case INITIALIZE:
      // 1. Obtain the resource stage
      stage = getNextStage(Stage.INITIALIZE);
      // 2. Obtain the generator for that stage
      currentGenerator = getNextGenerator();
      // 3. Execute
      runGenerators();
      break;
    case SWITCH_TO_SOURCE_SERVICE:
      runGenerators();
      break;
    case DECODE_DATA:
      decodeFromRetrievedData();
      break;
    default:
      throw new IllegalStateException("Unrecognized run reason: " + runReason);
  }
}

/*DecodeJob*/
private Stage getNextStage(Stage current) {
  switch (current) {
    case INITIALIZE:
      return diskCacheStrategy.decodeCachedResource()
          ? Stage.RESOURCE_CACHE
          : getNextStage(Stage.RESOURCE_CACHE);
    case RESOURCE_CACHE:
      return diskCacheStrategy.decodeCachedData()
          ? Stage.DATA_CACHE
          : getNextStage(Stage.DATA_CACHE);
    case DATA_CACHE:
      // Skip loading from source if the user opted to only retrieve the resource from cache.
      return onlyRetrieveFromCache ? Stage.FINISHED : Stage.SOURCE;
    case SOURCE:
    case FINISHED:
      return Stage.FINISHED;
    default:
      throw new IllegalArgumentException("Unrecognized stage: " + current);
  }
}

/*DecodeJob*/
private DataFetcherGenerator getNextGenerator() {
  switch (stage) {
    case RESOURCE_CACHE:
      return new ResourceCacheGenerator(decodeHelper, this);
    case DATA_CACHE:
      return new DataCacheGenerator(decodeHelper, this);
    case SOURCE:
      return new SourceGenerator(decodeHelper, this);
    case FINISHED:
      return null;
    default:
      throw new IllegalStateException("Unrecognized stage: " + stage);
  }
}
```
The resource stage is determined by the cache strategy, the generator (executor) is obtained from the stage, and then the runGenerators() method is called:
```java
/*DecodeJob*/
private void runGenerators() {
  currentThread = Thread.currentThread();
  startFetchTime = LogTime.getLogTime();
  boolean isStarted = false;
  while (!isCancelled && currentGenerator != null
      && !(isStarted = currentGenerator.startNext())) {
    stage = getNextStage(stage);
    currentGenerator = getNextGenerator();
    if (stage == Stage.SOURCE) {
      reschedule();
      return;
    }
  }
}
```
As you can see, this method calls startNext() on the current generator. Since we configured the RESOURCE cache strategy, we look directly at the startNext() method of ResourceCacheGenerator:
```java
/*ResourceCacheGenerator*/
@Override
public boolean startNext() {
  ...
  while (modelLoaders == null || !hasNextModelLoader()) {
    ...
    // (1)
    currentKey =
        new ResourceCacheKey(
            helper.getArrayPool(),
            sourceId,
            helper.getSignature(),
            helper.getWidth(),
            helper.getHeight(),
            transformation,
            resourceClass,
            helper.getOptions());
    // (2)
    cacheFile = helper.getDiskCache().get(currentKey);
    if (cacheFile != null) {
      sourceKey = sourceId;
      modelLoaders = helper.getModelLoaders(cacheFile);
      modelLoaderIndex = 0;
    }
  }
  loadData = null;
  boolean started = false;
  while (!started && hasNextModelLoader()) {
    ModelLoader<File, ?> modelLoader = modelLoaders.get(modelLoaderIndex++);
    loadData =
        modelLoader.buildLoadData(
            cacheFile, helper.getWidth(), helper.getHeight(), helper.getOptions());
    if (loadData != null && helper.hasLoadPath(loadData.fetcher.getDataClass())) {
      started = true;
      // (3)
      loadData.fetcher.loadData(helper.getPriority(), this);
    }
  }
  return started;
}
```
Following the points I marked: the cache Key is built first (1), then the cache file is fetched by that Key (2), i.e. the transformed image is retrieved, and finally the cache file is loaded into the required data (3). helper.getDiskCache() returns a DiskLruCacheWrapper, which internally operates a DiskLruCache; that is, this level of disk cache also uses the LRU algorithm.
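The key-to-file idea behind a disk cache can be sketched in miniature. This is not DiskLruCacheWrapper (there is no LRU bookkeeping, journal, or size limit here; all names are mine): it only shows hashing the cache key into a safe file name, with put() writing bytes and get() reading them back or returning null on a miss:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal disk-cache sketch: the key is hashed to a hex file name inside a
// cache directory; presence of the file is a cache hit.
class SketchDiskCache {
    private final Path dir;

    SketchDiskCache(Path dir) { this.dir = dir; }

    private Path fileFor(String key) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(key.getBytes(StandardCharsets.UTF_8));
        StringBuilder name = new StringBuilder();
        for (byte b : digest) name.append(String.format("%02x", b));
        return dir.resolve(name.toString());
    }

    void put(String key, byte[] data) throws Exception {
        Files.write(fileFor(key), data);
    }

    byte[] get(String key) throws Exception {
        Path file = fileFor(key);
        return Files.exists(file) ? Files.readAllBytes(file) : null;
    }

    public static void main(String[] args) throws Exception {
        SketchDiskCache cache = new SketchDiskCache(Files.createTempDirectory("sketch-cache"));
        String key = "http://a.com/1.png#200x200"; // hypothetical composite key
        if (cache.get(key) != null) throw new AssertionError("expected a miss before put()");
        cache.put(key, "image-bytes".getBytes(StandardCharsets.UTF_8));
        byte[] hit = cache.get(key);
        if (hit == null || !"image-bytes".equals(new String(hit, StandardCharsets.UTF_8)))
            throw new AssertionError("expected a hit after put()");
    }
}
```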
The fetcher here is a ByteBufferFileLoader, so let's look at its loadData() method:
```java
/*ByteBufferFileLoader*/
@Override
public void loadData(
    @NonNull Priority priority, @NonNull DataCallback<? super ByteBuffer> callback) {
  ByteBuffer result;
  try {
    result = ByteBufferUtil.fromFile(file);
  } catch (IOException e) {
    if (Log.isLoggable(TAG, Log.DEBUG)) {
      Log.d(TAG, "Failed to obtain ByteBuffer for file", e);
    }
    callback.onLoadFailed(e);
    return;
  }
  callback.onDataReady(result);
}
```
The main work here is converting the cache file into a ByteBuffer, which is delivered through onDataReady() and finally lands in the DecodeJob's onDataFetcherReady() method.
That is the process for fetching the cache; so where is it stored? We can work backwards: fetching the cache uses a ResourceCacheKey, so storing it must use a ResourceCacheKey too. Searching for its usages shows that, apart from ResourceCacheGenerator, it is only used in DecodeJob's onResourceDecoded() method:
```java
/*DecodeJob*/
<Z> Resource<Z> onResourceDecoded(DataSource dataSource, @NonNull Resource<Z> decoded) {
  ...
  boolean isFromAlternateCacheKey = !decodeHelper.isSourceKey(currentSourceKey);
  if (diskCacheStrategy.isResourceCacheable(
      isFromAlternateCacheKey, dataSource, encodeStrategy)) {
    if (encoder == null) {
      throw new Registry.NoResultEncoderAvailableException(transformed.get().getClass());
    }
    final Key key;
    // (1)
    switch (encodeStrategy) {
      case SOURCE:
        key = new DataCacheKey(currentSourceKey, signature);
        break;
      case TRANSFORMED:
        key =
            new ResourceCacheKey(
                decodeHelper.getArrayPool(),
                currentSourceKey,
                signature,
                width,
                height,
                appliedTransformation,
                resourceSubClass,
                options);
        break;
      default:
        throw new IllegalArgumentException("Unknown strategy: " + encodeStrategy);
    }
    LockedResource<Z> lockedResult = LockedResource.obtain(transformed);
    // (2)
    deferredEncodeManager.init(key, encoder, lockedResult);
    result = lockedResult;
  }
  return result;
}
```
Follow into DeferredEncodeManager#init():
```java
private static class DeferredEncodeManager<Z> {
  private Key key;
  private ResourceEncoder<Z> encoder;
  private LockedResource<Z> toEncode;

  <X> void init(Key key, ResourceEncoder<X> encoder, LockedResource<X> toEncode) {
    this.key = key;
    this.encoder = (ResourceEncoder<Z>) encoder;
    this.toEncode = (LockedResource<Z>) toEncode;
  }

  void encode(DiskCacheProvider diskCacheProvider, Options options) {
    GlideTrace.beginSection("DecodeJob.encode");
    try {
      // (3)
      diskCacheProvider
          .getDiskCache()
          .put(key, new DataCacheWriter<>(encoder, toEncode, options));
    } finally {
      toEncode.unlock();
      GlideTrace.endSection();
    }
  }
}
```
Following the points I marked: a different cache Key is built depending on the encode strategy (1), then DeferredEncodeManager's init() is called to assign the key (2), and the key is later used in encode() (3), which performs the store operation (storing the transformed image).
encode() is called in DecodeJob#notifyEncodeAndRelease():
```java
/*DecodeJob*/
private void notifyEncodeAndRelease(Resource<R> resource, DataSource dataSource) {
  if (resource instanceof Initializable) {
    ((Initializable) resource).initialize();
  }
  Resource<R> result = resource;
  LockedResource<R> lockedResource = null;
  if (deferredEncodeManager.hasResourceToEncode()) {
    lockedResource = LockedResource.obtain(resource);
    result = lockedResource;
  }
  notifyComplete(result, dataSource);
  stage = Stage.ENCODE;
  try {
    if (deferredEncodeManager.hasResourceToEncode()) {
      // Cache the resource to disk
      deferredEncodeManager.encode(diskCacheProvider, options);
    }
  } finally {
    if (lockedResource != null) {
      lockedResource.unlock();
    }
  }
  // Call onEncodeComplete outside the finally block so that it's not called if the encode process
  // throws.
  onEncodeComplete();
}
```
notifyEncodeAndRelease() belongs to the "notify completion" step described in the previous article: on the first load, the data is requested in SourceGenerator#startNext(), decoded, and then the cache is stored here.
We have now covered fetching and storing the transformed image; the remaining cleanup is again handled automatically by the LRU algorithm. Next, let's look at how the original image is fetched, stored, and cleaned up.
5.3 Data Sources
This level of cache only caches raw images, so we need to configure the following policy first:
```java
Glide.with(this).load(url).diskCacheStrategy(DiskCacheStrategy.DATA).into(imageView);
```
The flow is the same as for the resource type except that the cache strategy is now DATA, so we skip ahead and look directly at the startNext() method of DataCacheGenerator:
```java
/*DataCacheGenerator*/
@Override
public boolean startNext() {
  while (modelLoaders == null || !hasNextModelLoader()) {
    ...
    // (1)
    Key originalKey = new DataCacheKey(sourceId, helper.getSignature());
    // (2)
    cacheFile = helper.getDiskCache().get(originalKey);
    if (cacheFile != null) {
      this.sourceKey = sourceId;
      modelLoaders = helper.getModelLoaders(cacheFile);
      modelLoaderIndex = 0;
    }
  }
  loadData = null;
  boolean started = false;
  while (!started && hasNextModelLoader()) {
    ModelLoader<File, ?> modelLoader = modelLoaders.get(modelLoaderIndex++);
    loadData =
        modelLoader.buildLoadData(
            cacheFile, helper.getWidth(), helper.getHeight(), helper.getOptions());
    if (loadData != null && helper.hasLoadPath(loadData.fetcher.getDataClass())) {
      started = true;
      // (3)
      loadData.fetcher.loadData(helper.getPriority(), this);
    }
  }
  return started;
}
```
Again following the marked points: the cache Key is built first, then the cache file is fetched by that Key (fetching the original image), and finally the cache file is loaded into the required data. As with the resource type, helper.getDiskCache() is a DiskLruCacheWrapper, so this level of disk cache is also implemented with the LRU algorithm.
The fetcher is also a ByteBufferFileLoader, which likewise ends up calling back into the DecodeJob's onDataFetcherReady() method.
So where is this cache stored? Using the same backwards method, DataCacheKey turns out to be used in two other places besides DataCacheGenerator. The first is the same as for the resource type, in DecodeJob#onResourceDecoded():
```java
/*DecodeJob*/
<Z> Resource<Z> onResourceDecoded(DataSource dataSource, @NonNull Resource<Z> decoded) {
  ...
  boolean isFromAlternateCacheKey = !decodeHelper.isSourceKey(currentSourceKey);
  // (1)
  if (diskCacheStrategy.isResourceCacheable(
      isFromAlternateCacheKey, dataSource, encodeStrategy)) {
    if (encoder == null) {
      throw new Registry.NoResultEncoderAvailableException(transformed.get().getClass());
    }
    final Key key;
    switch (encodeStrategy) {
      case SOURCE:
        key = new DataCacheKey(currentSourceKey, signature);
        break;
      case TRANSFORMED:
        key =
            new ResourceCacheKey(
                decodeHelper.getArrayPool(),
                currentSourceKey,
                signature,
                width,
                height,
                appliedTransformation,
                resourceSubClass,
                options);
        break;
      default:
        throw new IllegalArgumentException("Unknown strategy: " + encodeStrategy);
    }
    LockedResource<Z> lockedResult = LockedResource.obtain(transformed);
    deferredEncodeManager.init(key, encoder, lockedResult);
    result = lockedResult;
  }
  return result;
}
```
Since the strategy configured above is DATA, the isResourceCacheable() method of the DATA strategy is what gets called:
```java
/*DiskCacheStrategy*/
public static final DiskCacheStrategy DATA =
    new DiskCacheStrategy() {
      @Override
      public boolean isDataCacheable(DataSource dataSource) {
        return dataSource != DataSource.DATA_DISK_CACHE
            && dataSource != DataSource.MEMORY_CACHE;
      }

      // This method is called
      @Override
      public boolean isResourceCacheable(
          boolean isFromAlternateCacheKey, DataSource dataSource, EncodeStrategy encodeStrategy) {
        return false;
      }

      @Override
      public boolean decodeCachedResource() {
        return false;
      }

      @Override
      public boolean decodeCachedData() {
        return true;
      }
    };
```
As you can see, isResourceCacheable() always returns false, so point (1) above is unreachable and can be ruled out.
Let’s move on to another one:
```java
/*SourceGenerator*/
private void cacheData(Object dataToCache) {
  long startTime = LogTime.getLogTime();
  try {
    Encoder<Object> encoder = helper.getSourceEncoder(dataToCache);
    DataCacheWriter<Object> writer =
        new DataCacheWriter<>(encoder, dataToCache, helper.getOptions());
    // (1)
    originalKey = new DataCacheKey(loadData.sourceKey, helper.getSignature());
    // (2)
    helper.getDiskCache().put(originalKey, writer);
    ...
  } finally {
    loadData.fetcher.cleanup();
  }
  sourceCacheGenerator =
      new DataCacheGenerator(Collections.singletonList(loadData.sourceKey), helper, this);
}
```
Here the cache Key is built first (1) and then the cache is stored (2), i.e. the original image is stored. This method is called in SourceGenerator#startNext():
```java
/*SourceGenerator*/
@Override
public boolean startNext() {
  // (1)
  if (dataToCache != null) {
    Object data = dataToCache;
    dataToCache = null;
    cacheData(data);
  }
  if (sourceCacheGenerator != null && sourceCacheGenerator.startNext()) {
    return true;
  }
  sourceCacheGenerator = null;
  loadData = null;
  boolean started = false;
  while (!started && hasNextModelLoader()) {
    loadData = helper.getLoadData().get(loadDataListIndex++);
    if (loadData != null
        && (helper.getDiskCacheStrategy().isDataCacheable(loadData.fetcher.getDataSource())
            || helper.hasLoadPath(loadData.fetcher.getDataClass()))) {
      started = true;
      startNextLoad(loadData);
    }
  }
  return started;
}
```
Point (1) runs only when dataToCache is not null. So where is dataToCache assigned? In SourceGenerator#onDataReadyInternal():
```java
/*SourceGenerator*/
void onDataReadyInternal(LoadData<?> loadData, Object data) {
  DiskCacheStrategy diskCacheStrategy = helper.getDiskCacheStrategy();
  if (data != null && diskCacheStrategy.isDataCacheable(loadData.fetcher.getDataSource())) {
    // Assign
    dataToCache = data;
    // We might be being called back on someone else's thread. Before doing anything, we should
    // reschedule to get back onto Glide's thread.
    // Callback
    cb.reschedule();
  } else {
    cb.onDataFetcherReady(
        loadData.sourceKey,
        data,
        loadData.fetcher,
        loadData.fetcher.getDataSource(),
        originalKey);
  }
}
```
As you can see, onDataReadyInternal() is the familiar method from the previous article, called after the data has been loaded. Last time it went down the else branch because caching was disabled; here the strategy is DATA, so it naturally takes the if branch.
The reschedule() method of EngineJob is called:
```java
/*EngineJob*/
@Override
public void reschedule(DecodeJob<?> job) {
  getActiveSourceExecutor().execute(job);
}
```
The DecodeJob is executed again on the thread pool, so we end up back in SourceGenerator's startNext(); this time dataToCache is not null, so the data is cached. cacheData() also builds a DataCacheGenerator, and DataCacheGenerator#startNext() then fetches the cache back from disk before the image is displayed on the view. In other words, the network request fetches the data and caches it, and the image shown on the view is read back from the disk cache.
Similarly, the LRU algorithm automatically cleans up the original images for us.
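The first-load path just described (fetch from the source, store into the data cache, then read back from the cache before display) can be sketched end to end. The "network" and "disk" here are plain maps standing in for the real thing, and all names are mine:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the first-load flow: a cache miss goes to the source, writes the
// raw data to the disk cache, and the displayed bytes are then read back
// from that cache (as SourceGenerator hands off to a DataCacheGenerator).
class SketchFirstLoad {
    static final Map<String, String> network = new HashMap<>();
    static final Map<String, String> diskCache = new HashMap<>();

    static String load(String url) {
        String cached = diskCache.get(url);
        if (cached != null) return cached;  // subsequent loads: disk hit
        String fetched = network.get(url);  // first load: go to the source
        diskCache.put(url, fetched);        // cache the original data...
        return diskCache.get(url);          // ...then read it back to display
    }

    public static void main(String[] args) {
        network.put("http://a.com/1.png", "raw-bytes");
        if (!"raw-bytes".equals(load("http://a.com/1.png")))
            throw new AssertionError("first load should fetch and cache");
        network.clear();                    // pretend the server is unreachable now
        if (!"raw-bytes".equals(load("http://a.com/1.png")))
            throw new AssertionError("second load should be served from the disk cache");
    }
}
```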
Note that for both the Resource and Data levels we located the store step by working backwards from the cache Key. Working backwards like this is sometimes convenient, but if you are not comfortable with it you can simply follow the program forward: start from where the network request returns data and trace step by step to see where it gets cached.
6. Summary
Analyzing Glide's cache mechanism shows that its design is indeed exquisite. The four-level cache greatly improves image loading efficiency, and the disk cache strategies add flexibility to the framework. If we ever design an image loading framework ourselves, we can borrow from Glide's strengths.
References:
- glide-docs-cn
- The most complete interpretation of Glide
About me
I am Wildma, a CSDN certified blog expert and an excellent author on Jianshu, specializing in screen adaptation. If this article helped you, a like is the biggest recognition for me!