Data cache

Image data originates on a remote server and cannot be shown in a View of the client interface until it has been loaded into memory. To speed up processing, the data produced at each step should be saved so that it can be reused directly the next time it is needed instead of being regenerated. That is what caching is for.

Caching strategies

A data cache needs an appropriate caching strategy for two reasons:

  1. Space is limited: both the memory an app may allocate and the space it may occupy in the file system are bounded, and the limits depend on the operating system's policies;
  2. When space runs out, existing data has to be evicted, and different pieces of data differ in how likely and how often they will be used again.

Classification of cache

As the figure above shows, caches can be divided by location into the following categories:

  1. Network cache: also known as a cloud cache; it requires purchasing a service from a cloud provider, such as a CDN.
  2. Disk cache: files are downloaded from the cloud and saved in the local file system.
  3. Memory cache: data must be loaded into memory before it can be used, and memory is much smaller than file-system space.

The figure also shows part of the client network framework. A network framework usually provides a file-system cache and a memory cache of its own, but it runs in the same app instance and shares the same memory and file-system space, so there is no need to cache the same data in both places; take care how the two are combined.

Cache tool

Each of the following sections involves tools such as BaseImageDownloader and BaseImageDecoder, which both hook into the framework and, as standalone utility classes, provide the actual image download and decoding services.

File cache

Load to the file system

  • BaseImageDownloader

    Universal image-loader uses BaseImageDownloader to pull image resources from several kinds of data sources into the cache directory:

     private boolean downloadImage() throws IOException {
         // Get the data stream
         InputStream is = getDownloader().getStream(uri, options.getExtraForDownloader());
         if (is == null) {
             L.e(ERROR_NO_IMAGE_STREAM, memoryCacheKey);
             return false;
         } else {
             try {
                 // Save the stream into the disk cache configured for the loader
                 return configuration.diskCache.save(uri, is, this);
             } finally {
                 IoUtils.closeSilently(is);
             }
         }
     }

     @Override
     public InputStream getStream(String imageUri, Object extra) throws IOException {
         switch (Scheme.ofUri(imageUri)) {
             case HTTP:
             case HTTPS:
                 return getStreamFromNetwork(imageUri, extra);
             case FILE:
                 return getStreamFromFile(imageUri, extra);
             case CONTENT:
                 return getStreamFromContent(imageUri, extra);
             case ASSETS:
                 return getStreamFromAssets(imageUri, extra);
             case DRAWABLE:
                 return getStreamFromDrawable(imageUri, extra);
             case UNKNOWN:
             default:
                 return getStreamFromOtherSource(imageUri, extra);
         }
     }
  • BaseImageDownloader extension

    BaseImageDownloader provides a dedicated getStreamFromXxx() entry point for each resource type.

    HTTP/HTTPS data is fetched with HttpURLConnection, the network API provided by the Java platform. If you want to use another network framework such as OkHttp, you can subclass BaseImageDownloader, re-implement getStreamFromNetwork(), and set an instance of it as ImageLoaderConfiguration#downloader, as in the sketch below.
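
    For example, the network branch could be routed through OkHttp. The sketch below is a minimal illustration, not part of the library: the class name OkHttpImageDownloader is made up here, and the HTTP calls assume the OkHttp 3.x API; everything else comes from BaseImageDownloader.

     import java.io.IOException;
     import java.io.InputStream;

     import android.content.Context;

     import com.nostra13.universalimageloader.core.download.BaseImageDownloader;

     import okhttp3.OkHttpClient;
     import okhttp3.Request;
     import okhttp3.Response;

     // Hypothetical downloader that routes HTTP/HTTPS traffic through OkHttp.
     public class OkHttpImageDownloader extends BaseImageDownloader {

         private final OkHttpClient client;

         public OkHttpImageDownloader(Context context, OkHttpClient client) {
             super(context);
             this.client = client;
         }

         @Override
         protected InputStream getStreamFromNetwork(String imageUri, Object extra) throws IOException {
             // Only the network branch is overridden; file/content/assets/drawable
             // URIs still fall through to BaseImageDownloader's default handling.
             Request request = new Request.Builder().url(imageUri).build();
             Response response = client.newCall(request).execute();
             if (!response.isSuccessful()) {
                 response.close();
                 throw new IOException("Unexpected HTTP code " + response.code());
             }
             return response.body().byteStream();
         }
     }

    Registering it is then a matter of passing an instance to the configuration builder, e.g. new ImageLoaderConfiguration.Builder(context).imageDownloader(new OkHttpImageDownloader(context, new OkHttpClient())), assuming the builder method is named imageDownloader() in the library version you use.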

File system cache policy

The class structure of the file cache is relatively simple: it provides two kinds of cache, unlimited-capacity and limited-capacity, and two reclamation mechanisms for controlling capacity:

  1. LimitedAgeDiskCache: controls cache-space reclamation by how long an entry has been cached.
  2. LruDiskCache: controls cache-space reclamation with the least-recently-used algorithm.

By default, the Universal image-loader framework uses UnlimitedDiskCache; the disk cache implementation is specified by ImageLoaderConfiguration#diskCache.
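
If a bounded disk cache is wanted instead, a different DiskCache implementation can be passed to the configuration builder. The sketch below is a minimal illustration, not the framework's own code; the 50 MB limit is arbitrary, and the package and class names are the ones the library documents for its LRU disk cache (verify them against the version you use):

    import java.io.File;
    import java.io.IOException;

    import android.content.Context;

    import com.nostra13.universalimageloader.cache.disc.impl.ext.LruDiskCache;
    import com.nostra13.universalimageloader.cache.disc.naming.Md5FileNameGenerator;
    import com.nostra13.universalimageloader.core.ImageLoader;
    import com.nostra13.universalimageloader.core.ImageLoaderConfiguration;
    import com.nostra13.universalimageloader.utils.StorageUtils;

    public final class DiskCacheSetup {

        // Initializes ImageLoader with an LRU-bounded disk cache instead of the unlimited default.
        // LruDiskCache's constructor can throw IOException, so the caller has to handle it.
        public static void init(Context context) throws IOException {
            File cacheDir = StorageUtils.getCacheDirectory(context);
            ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(context)
                    .diskCache(new LruDiskCache(cacheDir, new Md5FileNameGenerator(), 50 * 1024 * 1024))
                    .build();
            ImageLoader.getInstance().init(config);
        }
    }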

Memory cache

Load into memory

The default value of ImageLoaderConfiguration#decoder is BaseImageDecoder. The decode parameters it applies (decodingOptions) come from the configuration parameter DisplayImageOptions#decodingOptions.
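
As an illustration of how those decode parameters are supplied, the sketch below builds a DisplayImageOptions whose decodingOptions prefer RGB_565 bitmaps. The specific choices are examples rather than library defaults, and the builder method names assume the 1.9.x API:

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    import com.nostra13.universalimageloader.core.DisplayImageOptions;

    public final class DecodeOptionsExample {

        public static DisplayImageOptions build() {
            // Raw BitmapFactory.Options handed to the decoder through decodingOptions().
            BitmapFactory.Options decodingOptions = new BitmapFactory.Options();
            decodingOptions.inPreferredConfig = Bitmap.Config.RGB_565; // half the per-pixel memory of ARGB_8888

            return new DisplayImageOptions.Builder()
                    .decodingOptions(decodingOptions)
                    .cacheInMemory(true)
                    .cacheOnDisk(true)
                    .build();
        }
    }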

Memory caching strategy

The image above shows the class diagram of the MemoryCache module, with the orange area as the core. The memory cache strategy used by the framework at runtime is specified by the ImageLoaderConfiguration#memoryCache parameter; the default is LruMemoryCache.
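
A minimal configuration sketch that replaces the default with an explicitly sized LruMemoryCache follows; the one-eighth-of-heap sizing is a common convention rather than anything the framework prescribes:

    import android.content.Context;

    import com.nostra13.universalimageloader.cache.memory.impl.LruMemoryCache;
    import com.nostra13.universalimageloader.core.ImageLoader;
    import com.nostra13.universalimageloader.core.ImageLoaderConfiguration;

    public final class MemoryCacheSetup {

        public static void init(Context context) {
            // Size the cache to roughly 1/8 of the app's maximum heap.
            int cacheSizeBytes = (int) (Runtime.getRuntime().maxMemory() / 8);

            ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(context)
                    .memoryCache(new LruMemoryCache(cacheSizeBytes))
                    .build();
            ImageLoader.getInstance().init(config);
        }
    }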

  • Two basic schemes

    1. LruMemoryCache: a LinkedHashMap with capacity control and accessOrder = true; the reclamation policy is least recently used.
    2. WeakMemoryCache: a synchronized HashMap with no capacity limit; bitmaps are held through weak references.
  • LimitedMemoryCache extension scheme

    LimitedMemoryCache is a family of synchronized HashMap caches with capacity control, in which bitmaps are likewise held through weak references. The overall effect is WeakMemoryCache plus capacity control, with a different reclamation strategy applied when space runs short. To support its strategy, each class keeps additional bookkeeping that identifies which entry should be reclaimed next:

    1. FIFOLimitedMemoryCache: first in, first out; entries are tracked in a queue, and when space runs short the entry at the head of the queue is reclaimed first.
    2. LRULimitedMemoryCache: least recently used; a LinkedHashMap with accessOrder = true, so when space runs short the least recently accessed entry is reclaimed. Same algorithm as LruMemoryCache.
    3. UsingFreqLimitedMemoryCache: least frequently used; usingCounts records how many times each entry has been used, and when space runs short the least-used entry is reclaimed.
    4. LargestLimitedMemoryCache: largest first; valueSizes records the size of each entry, and when space runs short the entry occupying the most space is reclaimed.
  • Two wrapper classes

    1. LimitedAgeMemoryCache: adds "maximum age" control on top of another cache. A maximum retention time (in seconds) is set; when an entry is accessed, the cache checks whether it has exceeded the allowed age and deletes it if it has (see the sketch after this list).
    2. FuzzyKeyMemoryCache: takes a Comparator instance; before a new entry is added, any existing entries whose keys the comparator treats as equal are deleted, and then the new entry is inserted.
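
Both wrappers take another MemoryCache in their constructor, so they compose with any of the schemes above. The sketch below is an illustration only; the 8 MB size and the 10-minute age limit are arbitrary values, and the constructor signatures follow the library's documented API (verify against your version):

    import com.nostra13.universalimageloader.cache.memory.MemoryCache;
    import com.nostra13.universalimageloader.cache.memory.impl.LimitedAgeMemoryCache;
    import com.nostra13.universalimageloader.cache.memory.impl.LruMemoryCache;

    public final class WrappedMemoryCacheExample {

        // Builds an LRU cache whose entries additionally expire after 10 minutes.
        public static MemoryCache build() {
            MemoryCache lru = new LruMemoryCache(8 * 1024 * 1024); // 8 MB LRU core
            return new LimitedAgeMemoryCache(lru, 10 * 60);        // max age in seconds
        }
    }

The result can then be passed to ImageLoaderConfiguration#memoryCache in the same way as the plain LruMemoryCache shown earlier.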