The cache in Glide
By default, Glide checks the following levels of cache before starting a new image request:
- Active resources – Is another View showing this image right now?
- Memory cache – Was this image loaded recently, and is it still in memory?
- Resource – Has this image been decoded, transformed, and written to the disk cache before?
- Data – Was the original data this image was built from previously written to the disk cache?
The first two steps check whether the image is in memory and, if so, return it directly. The last two check whether the image is on disk, so that it can be returned quickly but asynchronously.
If all four levels miss, Glide falls back to the original source to retrieve the data (file, Uri, URL, etc.).
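The four-level lookup order can be sketched as follows. This is a minimal model, not Glide's real API: the class and method names are illustrative, and strings stand in for keys and resources.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of Glide's four-level cache lookup order.
class CacheLookupSketch {
    final Map<String, String> activeResources = new HashMap<>();   // in use right now
    final Map<String, String> memoryCache = new HashMap<>();       // recently used
    final Map<String, String> resourceDiskCache = new HashMap<>(); // decoded + transformed
    final Map<String, String> dataDiskCache = new HashMap<>();     // original source data

    String load(String key) {
        String r = activeResources.get(key);    // 1. active resources
        if (r != null) return r;
        r = memoryCache.get(key);               // 2. memory cache
        if (r != null) return r;
        r = resourceDiskCache.get(key);         // 3. decoded resource on disk
        if (r != null) return r;
        r = dataDiskCache.get(key);             // 4. original data on disk
        if (r != null) return r;
        return fetchFromSource(key);            // all four levels missed: go to the source
    }

    String fetchFromSource(String key) {
        return "fetched:" + key;
    }
}
```

Each level is only consulted if the one before it missed, which is why a hit in active resources or the memory cache is synchronous while a disk hit is returned asynchronously.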
What is three-level caching?
- Memory cache: checked first, fastest
- Local (disk) cache: checked second, fast
- Network: the last resort; slow, and it costs traffic
Caching mechanisms
Glide uses ActiveResources (an active cache of weak references) + MemoryCache (LRU algorithm) + DiskCache (LRU algorithm).
- ActiveResources: stores the images in use by the current interface. Once the interface is no longer displayed, the Bitmap is cached in the MemoryCache and removed from ActiveResources.
- MemoryCache: stores Bitmaps that are not currently in use. Once a Bitmap is obtained from the MemoryCache, it is cached in ActiveResources and removed from the MemoryCache.
- DiskCache: the persistent cache. For example, if you add rounded corners to an image, the transformed image is cached in a file and can be used directly the next time the application is opened.
ActiveResources + MemoryCache form the runtime cache and are mutually exclusive (the same image is never held in both at the same time); neither survives after the application is killed.
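The hand-off between the two runtime caches can be sketched like this. It is a simplified model with hypothetical names, not Glide's actual classes: acquiring a resource promotes it from the memory cache into the active cache, and releasing it demotes it back.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the mutual exclusion between ActiveResources and MemoryCache:
// an image lives in exactly one of the two maps at any moment.
class RuntimeCacheSketch {
    final Map<String, String> active = new HashMap<>();
    final Map<String, String> memory = new HashMap<>();

    String acquire(String key) {
        String r = active.get(key);
        if (r == null) {
            r = memory.remove(key);            // promote: remove from the memory cache...
            if (r != null) active.put(key, r); // ...and track it as active
        }
        return r;
    }

    void release(String key) {
        String r = active.remove(key);         // no longer displayed: demote...
        if (r != null) memory.put(key, r);     // ...back into the memory cache
    }
}
```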
Internally, Glide is implemented with LruCache, weak references, and a disk cache. Glide divides its cache into two main parts, the memory cache and the disk cache; the combination of the two forms the core of Glide's cache mechanism.
Why was the active cache designed?
Because the memory cache uses the LRU algorithm: suppose you load and display a first image with Glide, and many more images are loaded while that first image is still on screen. At that point the LRU algorithm may evict the first image, the one you are still using, from the memory cache. If the Bitmap being displayed were then reclaimed, the program would crash. Keeping in-use images in a separate active cache of weak references avoids this.
Loading process
That is the overall process; now let's deepen our understanding by walking through the source code.
Glide source
Loading process
1. The Engine class
Responsible for starting loads and managing active and cached resources. It contains a load method, which is the entry point for loading an image from a path.
2. The load method
This method is packed with substance.
```java
public <R> LoadStatus load(...) {
    long startTime = VERBOSE_IS_LOGGABLE ? LogTime.getLogTime() : 0;
    EngineKey key =
        keyFactory.buildKey(
            model, signature, width, height, transformations, resourceClass, transcodeClass, options);

    EngineResource<?> memoryResource;
    synchronized (this) {
      memoryResource = loadFromMemory(key, isMemoryCacheable, startTime);
      if (memoryResource == null) {
        return waitForExistingOrStartNewJob(...);
      }
    }

    // Avoid calling back while holding the engine lock; doing so makes it easier for callers to
    // deadlock.
    cb.onResourceReady(
        memoryResource, DataSource.MEMORY_CACHE, /* isLoadedFromAlternateCacheKey= */ false);
    return null;
}
```
3.EngineKey
An in-memory-only cache key used to multiplex (de-duplicate) loads.

```java
EngineKey key = keyFactory.buildKey(...);
```
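The idea behind EngineKey can be illustrated with a simplified composite key. This is a hypothetical class carrying only a subset of the fields Glide actually hashes; the real key also covers the signature, transformations, options, and resource/transcode classes. Two loads of the same model at the same size produce equal keys and can be multiplexed; a different size produces a different key.

```java
import java.util.Objects;

// Simplified stand-in for EngineKey: equality over (model, width, height).
final class SimpleEngineKey {
    private final Object model;
    private final int width;
    private final int height;

    SimpleEngineKey(Object model, int width, int height) {
        this.model = model;
        this.width = width;
        this.height = height;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof SimpleEngineKey)) return false;
        SimpleEngineKey k = (SimpleEngineKey) o;
        return width == k.width && height == k.height && model.equals(k.model);
    }

    @Override
    public int hashCode() {
        // Equal keys must produce equal hash codes so map lookups can multiplex loads.
        return Objects.hash(model, width, height);
    }
}
```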
4.loadFromMemory
loadFromMemory() tries the two in-memory levels in order: first loadFromActiveResources(), then loadFromCache().
5.loadFromActiveResources
Looks the key up in the weak-reference active cache of resources currently in use.
6.loadFromCache
Looks the key up in the LRU memory cache; on a hit the resource is promoted into the active cache.
7.getEngineResourceFromCache
Removes the matching entry from the memory cache and returns it as an EngineResource.
If nothing is found, the image is not in the memory cache. Let's continue down and follow the source.
8.waitForExistingOrStartNewJob
Here is a simplified version:
```java
private <R> LoadStatus waitForExistingOrStartNewJob(...) {
    // Jobs manages the in-flight loads; callbacks can be added and removed, and
    // are notified when a load completes.
    EngineJob<?> current = jobs.get(key, onlyRetrieveFromCache);
    if (current != null) {
      // A load for this key is already running: just attach another callback.
      current.addCallback(cb, callbackExecutor);
      if (VERBOSE_IS_LOGGABLE) {
        logWithTimeAndKey("Added to existing load", startTime, key);
      }
      return new LoadStatus(cb, current);
    }

    // No existing job: build a new EngineJob...
    EngineJob<R> engineJob =
        engineJobFactory.build(
            key,
            isMemoryCacheable,
            useUnlimitedSourceExecutorPool,
            useAnimationPool,
            onlyRetrieveFromCache);

    // ...and a DecodeJob, the class that decodes resources from cached data or
    // the original source and applies transformations and transcoding.
    DecodeJob<R> decodeJob =
        decodeJobFactory.build(
            ...
            engineJob);

    jobs.put(key, engineJob);
    engineJob.addCallback(cb, callbackExecutor);
    engineJob.start(decodeJob);

    if (VERBOSE_IS_LOGGABLE) {
      logWithTimeAndKey("Started new load", startTime, key);
    }
    return new LoadStatus(cb, engineJob);
}
```
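The de-duplication at the top of this method can be boiled down to a small sketch. The types below are hypothetical stand-ins for EngineJob and the jobs registry: if a job for the same key is already in flight, the new caller just piggybacks a callback on it; only otherwise is a new job started.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of "wait for existing or start new": one job per key, many callbacks.
class JobRegistry {
    static class Job {
        final List<String> callbacks = new ArrayList<>();
    }

    final Map<String, Job> jobs = new HashMap<>();
    int started = 0; // how many jobs were actually started

    Job load(String key, String callback) {
        Job current = jobs.get(key);
        if (current != null) {           // existing load: attach and return it
            current.callbacks.add(callback);
            return current;
        }
        Job job = new Job();             // no load in flight: start a new one
        job.callbacks.add(callback);
        jobs.put(key, job);
        started++;
        return job;
    }
}
```

This is why ten Views requesting the same URL at the same size trigger a single decode rather than ten.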
9.DecodeJob
```java
class DecodeJob<R>
    implements DataFetcherGenerator.FetcherReadyCallback,
        Runnable,
        Comparable<DecodeJob<?>>,
        Poolable {
  ...
  // The constructor takes a DiskCacheProvider, which ties DecodeJob to the disk cache.
  DecodeJob(DiskCacheProvider diskCacheProvider, Pools.Pool<DecodeJob<?>> pool) {
    this.diskCacheProvider = diskCacheProvider;
    this.pool = pool;
  }
  ...
}
```
10.DiskCacheProvider
The entry point to the disk cache implementation. It creates a disk cache based on {@link com.bumptech.glide.disklrucache.DiskLruCache} in the specified disk cache directory.
```java
public class DiskLruCacheFactory implements DiskCache.Factory {
  private final long diskCacheSize;
  private final CacheDirectoryGetter cacheDirectoryGetter;

  /** Interface called out of the UI thread to get the cache folder. */
  public interface CacheDirectoryGetter {
    File getCacheDirectory();
  }

  public DiskLruCacheFactory(final String diskCacheFolder, long diskCacheSize) {
    this(
        new CacheDirectoryGetter() {
          @Override
          public File getCacheDirectory() {
            return new File(diskCacheFolder);
          }
        },
        diskCacheSize);
  }

  public DiskLruCacheFactory(
      final String diskCacheFolder, final String diskCacheName, long diskCacheSize) {
    this(
        new CacheDirectoryGetter() {
          @Override
          public File getCacheDirectory() {
            return new File(diskCacheFolder, diskCacheName);
          }
        },
        diskCacheSize);
  }

  /**
   * When this constructor is used, {@link CacheDirectoryGetter#getCacheDirectory()} is called out
   * of the UI thread, allowing disk I/O without affecting performance.
   *
   * @param cacheDirectoryGetter Interface called out of the UI thread to get the cache folder.
   * @param diskCacheSize Desired maximum size of the LRU disk cache, in bytes.
   */
  // Public API.
  @SuppressWarnings("WeakerAccess")
  public DiskLruCacheFactory(CacheDirectoryGetter cacheDirectoryGetter, long diskCacheSize) {
    this.diskCacheSize = diskCacheSize;
    this.cacheDirectoryGetter = cacheDirectoryGetter;
  }

  @Override
  public DiskCache build() {
    File cacheDir = cacheDirectoryGetter.getCacheDirectory();
    if (cacheDir == null) {
      return null;
    }
    if (cacheDir.isDirectory() || cacheDir.mkdirs()) {
      return DiskLruCacheWrapper.create(cacheDir, diskCacheSize);
    }
    return null;
  }
}
```
11.DiskCache.Factory
DiskLruCacheFactory implements the DiskCache.Factory interface:
```java
/** An interface for writing data to and reading data from a disk cache. */
public interface DiskCache {
  /** An interface for creating a disk cache. */
  interface Factory {
    /** 250 MB of cache. */
    int DEFAULT_DISK_CACHE_SIZE = 250 * 1024 * 1024;

    String DEFAULT_DISK_CACHE_DIR = "image_manager_disk_cache";

    /** Returns a new disk cache, or {@code null} if no disk cache could be created. */
    @Nullable
    DiskCache build();
  }

  /** An interface that actually writes data to a key in the disk cache. */
  interface Writer {
    /**
     * Writes data to the file, returning false if the write should be aborted.
     *
     * @param file The file the Writer should write to.
     */
    boolean write(@NonNull File file);
  }

  /** Retrieves the cached value for the given key. */
  @Nullable
  File get(Key key);

  /**
   * @param key The key to write to.
   * @param writer An interface that writes data given an output stream for the key.
   */
  void put(Key key, Writer writer);

  /** Removes the key and value from the cache. */
  @SuppressWarnings("unused")
  void delete(Key key);

  /** Clears the cache. */
  void clear();
}
```
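To make the shape of this interface concrete, here is a toy file-backed cache with the same get/put/delete structure. It is purely illustrative: it uses String keys and skips the Writer callback, LRU trimming, and journaling that Glide's real implementation (DiskLruCacheWrapper over DiskLruCache) provides.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Toy disk cache: one file per key inside a cache directory.
class TinyDiskCache {
    private final Path dir;

    TinyDiskCache(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    static TinyDiskCache createTemp() throws IOException {
        return new TinyDiskCache(Files.createTempDirectory("tinycache"));
    }

    /** Returns the cached content for key, or null on a miss (like DiskCache.get). */
    String get(String key) throws IOException {
        Path f = dir.resolve(key);
        return Files.exists(f) ? new String(Files.readAllBytes(f)) : null;
    }

    /** Writes directly; the real DiskCache hands the caller a Writer callback instead. */
    void put(String key, String data) throws IOException {
        Files.write(dir.resolve(key), data.getBytes());
    }

    void delete(String key) throws IOException {
        Files.deleteIfExists(dir.resolve(key));
    }
}
```

Because the cache is just files on disk, entries survive process death, which is exactly what makes the disk level "persistent" in the earlier overview.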
Now that we have the disk cache's read and write interface, tracking down and understanding the rest of the associated source code is no problem; without it, it is easy to get lost.
What is the LRU
LRU (Least Recently Used) is a cache eviction algorithm. Its core idea is that when the cache is full, the least recently used cache entries are evicted first. Glide has two caches that use the LRU algorithm, LruCache and DiskLruCache, which implement the memory cache and the disk cache respectively; both are built around the same LRU idea.
The core idea of LruCache is to maintain a list of cached objects ordered by access: the most recently accessed object is moved to the tail of the queue, while objects that have not been accessed drift toward the head and are the first to be evicted.
LRU of memory cache
```java
/** An LRU in memory cache for {@link com.bumptech.glide.load.engine.Resource}s. */
public class LruResourceCache extends LruCache<Key, Resource<?>> implements MemoryCache {
  private ResourceRemovedListener listener;

  /**
   * Constructor for LruResourceCache.
   *
   * @param size The maximum size in bytes the in memory cache can use.
   */
  public LruResourceCache(long size) {
    super(size);
  }

  @Override
  public void setResourceRemovedListener(@NonNull ResourceRemovedListener listener) {
    this.listener = listener;
  }

  @Override
  protected void onItemEvicted(@NonNull Key key, @Nullable Resource<?> item) {
    if (listener != null && item != null) {
      listener.onResourceRemoved(item);
    }
  }

  @Override
  protected int getSize(@Nullable Resource<?> item) {
    if (item == null) {
      return super.getSize(null);
    } else {
      return item.getSize();
    }
  }

  @SuppressLint("InlinedApi")
  @Override
  public void trimMemory(int level) {
    if (level >= android.content.ComponentCallbacks2.TRIM_MEMORY_BACKGROUND) {
      // The app has entered the list of cached background apps:
      // evict our entire Bitmap cache.
      clearMemory();
    } else if (level >= android.content.ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN
        || level == android.content.ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL) {
      // The app's UI is no longer visible, or the app is in the foreground but the
      // system is running critically low on memory.
      // Evict the oldest half of our Bitmap cache.
      trimToSize(getMaxSize() / 2);
    }
  }
}
```
LruCache
Internally, a LinkedHashMap holds the data, and it is what implements the LRU (least recently used) cache policy.

```java
Map<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);
```

- The second parameter, 0.75f, is the load factor: when the map reaches 75% of its capacity, its internal table is resized (roughly doubling memory).
- The last, and most important, parameter sets the ordering of elements: true orders by access, false orders by insertion.
Implementation principle of LruCache
It takes advantage of LinkedHashMap's ordering feature: with access ordering, elements touched by get/put are moved to the tail of the map. So when a newly inserted element pushes the current cache size over the maximum, elements at the head of the map (the least recently used) are removed first.
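This pattern fits in a few lines using LinkedHashMap's removeEldestEntry hook. The sketch below evicts by entry count for simplicity; Glide's LruCache instead sizes entries in bytes via getSize.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache on top of LinkedHashMap's access-order mode.
class MiniLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    MiniLru(int maxEntries) {
        // accessOrder = true: get() moves an entry to the tail of the map.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put; returning true evicts the head (least recently used).
        return size() > maxEntries;
    }
}
```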
The four Java reference types
1. StrongReference
- The most common kind of reference.
- As long as the chain of strong references is unbroken, the object is never collected: when memory is insufficient, the JVM would rather throw an OutOfMemoryError than reclaim strongly referenced objects.
- Setting the reference to null breaks the chain so that the object can be reclaimed.
```java
Object object = new Object();
String str = "scc";
// Both are strong references.
```
2. SoftReference
- The object is in a useful-but-not-necessary state.
- The GC reclaims the referenced object's memory only when memory is insufficient.
- Can be used to implement caches, such as a web page cache or an image cache.
```java
// Note: the variable wrf itself holds a strong reference to the SoftReference object;
// what is softly referenced is the new String("str"), i.e. the T in SoftReference<T>.
SoftReference<String> wrf = new SoftReference<String>(new String("str"));
```
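A quick demonstration of the behavior described above: while memory is plentiful the referent stays reachable through get(), and the JVM only clears soft references under memory pressure, which is what makes them suitable for caches. The class and method names are mine, for illustration.

```java
import java.lang.ref.SoftReference;

// SoftReference keeps its referent available until memory runs low.
class SoftRefDemo {
    static String describe() {
        SoftReference<String> ref = new SoftReference<>(new String("str"));
        String value = ref.get(); // non-null here: no memory pressure yet
        return value != null ? "alive:" + value : "cleared";
    }
}
```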
3. WeakReference
A weak reference is one whose referent can be reclaimed as soon as the JVM garbage collector discovers it.
- For non-essential objects; weaker than a soft reference.
- Reclaimed during GC.
- In practice the referent may still survive for a while, because the GC thread has a low priority and may not run immediately.
- Useful for objects that are used occasionally and should not get in the way of garbage collection.
Usage in Glide:

```java
Map<Key, ResourceWeakReference> activeEngineResources = new HashMap<>();
// ResourceWeakReference is a weak reference: the active cache tracks
// in-use resources without preventing their collection.
```
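A small demonstration of the weak-reference lifecycle: while a strong reference exists, the weak reference still resolves; once the strong reference is dropped, the object becomes eligible for collection, and a GC cycle may clear the weak reference at any time. Because clearing is not guaranteed on any particular System.gc() call, only the "before" state is asserted here; the names are mine, for illustration.

```java
import java.lang.ref.WeakReference;

// WeakReference resolves only while a strong reference keeps the referent alive.
class WeakRefDemo {
    static boolean aliveWhileStronglyHeld() {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        boolean alive = weak.get() != null; // strong ref keeps the referent alive
        strong = null;                      // now only weakly reachable
        System.gc();                        // may or may not clear it immediately
        return alive;
    }
}
```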
4. PhantomReference
- Does not determine the lifecycle of its referent at all.
- The referent can be collected by the garbage collector at any time.
- Acts as a sentinel by tracking when objects are collected by the garbage collector.
- Must be used together with a ReferenceQueue.
When the garbage collector is about to reclaim an object and finds that it has a phantom reference, it adds the phantom reference to its associated reference queue.
A program can therefore tell whether a referenced object is about to be garbage collected by checking whether its phantom reference has been enqueued. If it has, the program can take any necessary action before the object's memory is reclaimed.
```java
Object obj = new Object();
ReferenceQueue queue = new ReferenceQueue();
PhantomReference reference = new PhantomReference(obj, queue);
// Drop the strong reference; only the phantom reference remains.
obj = null;
```
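One property worth demonstrating: unlike soft and weak references, a phantom reference's get() always returns null, even while the referent is still strongly reachable. A phantom reference is only useful through its ReferenceQueue. The class name below is mine, for illustration.

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

// PhantomReference.get() is defined to always return null.
class PhantomRefDemo {
    static boolean getIsAlwaysNull() {
        Object obj = new Object();
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> ref = new PhantomReference<>(obj, queue);
        return ref.get() == null; // true even while obj is strongly reachable
    }
}
```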
ReferenceQueue
- Has no real storage structure of its own; storage relies on the links between the internal Reference nodes themselves.
- Stores the associated soft, weak, and phantom references whose referents have been reclaimed by the GC.