Preface

When we use third-party frameworks such as Glide or OkHttp, they are easy to use precisely because we just use them, ignoring the caching technology underneath. So let's understand the nature of caching. The memory cache nowadays usually means LruCache, and the disk cache means DiskLruCache; both are built on the LRU (least recently used) algorithm. The network "cache" is, of course, the network request itself, with the data stored in a back-end database.
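
As a warm-up, here is a minimal sketch of the usual LruCache memory cache, sized to 1/8 of the app's heap. The sizeOf() override and the bitmap use case are my assumptions for illustration, not something this article's code requires (imports: android.graphics.Bitmap, android.util.LruCache):

int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
LruCache<String, Bitmap> memoryCache = new LruCache<String, Bitmap>(maxMemoryKb / 8) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        // Measure entries in KB so the max size is a byte budget, not an entry count
        return bitmap.getByteCount() / 1024;
    }
};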

LRU algorithm: Least Recently Used. When cached data is accessed and it exists in the cache, it is moved back to the head position, marking it as most recently used; the bottom of the cache holds the least recently used entries. When the cache is at capacity and new data needs to be cached, entries are deleted starting from the least recently used bottom.
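
Here is a minimal sketch of that idea in Java, leaning on LinkedHashMap the same way LruCache and DiskLruCache do (the class name and capacity are mine): with accessOrder = true, every get() moves the entry to the tail, and removeEldestEntry() evicts from the least recently used head.

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        super(16, 0.75f, true); // true = order entries by access, not insertion
        this.maxEntries = maxEntries;
    }

    // Called by LinkedHashMap after every put(); returning true evicts the
    // eldest (least recently used) entry
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}

For example, with maxEntries = 2, after put(a), put(b), get(a), put(c), the evicted entry is b, because get(a) refreshed a.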

That said, LRU has its disadvantages. Consider this scenario: e-commerce sites typically hold flash-sale days, and on such a day the caches that are genuinely accessed frequently may be evicted because of cache capacity pressure from the one-off traffic. How to optimize? There is a way: LRU + 2Q. It's a little off topic, so if you're interested, you can look it up yourself.

Now, follow my rhythm and we'll take DiskLruCache apart. Of course, you need to download the DiskLruCache source code first. Click here: source code address (hats off to JakeWharton).

1. Create DiskLruCache

DiskLruCache mDiskLruCache = null;
try {
    File cacheDir = getDiskCacheDir(this, "bitmap");
    if (!cacheDir.exists()) {
        cacheDir.mkdirs();
    }
    // valueCount = 1, maxSize = 10 MB
    mDiskLruCache = DiskLruCache.open(cacheDir, getVersionCode(this), 1, 10 * 1024 * 1024);
} catch (IOException e) {
    e.printStackTrace();
}

After running the above, open the phone's file system. Android devices vary, but you should find the path /sdcard/Android/data/<package name>/cache/bitmap/ (or, when external storage is unavailable, the corresponding internal path returned by getCacheDir()). In that folder you will see a journal file. After we cover open(), we'll cover the journal file.

As you can see, DiskLruCache is created with open(), which takes four parameters. Let's look at the source code. The comments are annotated inline, translated into English here.

    /*
     * Parameter 1: directory, the cache directory
     * Parameter 2: appVersion, the current app version, i.e. versionCode
     * Parameter 3: valueCount, the number of cache files per cache key
     * Parameter 4: maxSize, the maximum cache capacity
     */
    public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize)
            throws IOException {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        if (valueCount <= 0) {
            throw new IllegalArgumentException("valueCount <= 0");
        }

        // New up the cache object, i.e. assign the member variables:
        // directory, appVersion, valueCount, journalFile, etc.
        DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);

        // Why is the method called open? Look at the following check:
        // if the parameters we pass match the existing cache files,
        // the existing cache is reopened and returned

        // If the journal cache file exists
        if (cache.journalFile.exists()) {
            try {
                // Check whether the parameters recorded in the existing journal match the
                // ones assigned now, throwing an exception if they don't. This also reads
                // the log records into our in-memory cache, the LinkedHashMap<String, Entry>
                // lruEntries, explained further in [1.1 readJournal()]
                cache.readJournal();
                // Entries whose last journal line is DIRTY were never fully written to the
                // cache; they are cleaned up here, explained in detail in [1.2 processJournal()]
                cache.processJournal();
                // Internally this calls new FileWriter(file, append) with append = true, so the
                // output stream appends to the existing file; if append were false, the original
                // file would be truncated and rewritten. (The same append idea is how resumable
                // downloads work; look it up if you're interested.)
                cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true),
                        IO_BUFFER_SIZE);
                return cache;
            } catch (IOException journalIsCorrupt) {
                // If an exception is thrown, delete the old cache files;
                // a new cache is created below
                cache.delete();
            }
        }

        // Reaching this point means the existing cache was not returned (any corrupt
        // cache files were deleted in the catch block), so generate a fresh cache
        // with the new parameters
        directory.mkdirs();
        cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
        // rebuildJournal() is explained in detail in [1.3 rebuildJournal()]
        cache.rebuildJournal();
        return cache;
    }


// Get the cache path; external storage is used whenever it is available
public File getDiskCacheDir(Context context, String uniqueName) {
    String cachePath;
    if (Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())
            || !Environment.isExternalStorageRemovable()) {
        cachePath = context.getExternalCacheDir().getPath();
    } else {
        cachePath = context.getCacheDir().getPath();
    }
    return new File(cachePath + File.separator + uniqueName);
}


// Get the current app version number versionCode
public  int getVersionCode(Context context) {
    PackageManager packageManager = context.getPackageManager();
    PackageInfo packageInfo;
    int versionCode = 1;
    try {
        packageInfo = packageManager.getPackageInfo(context.getPackageName(), 0);
        versionCode = packageInfo.versionCode;
    } catch (PackageManager.NameNotFoundException e) {
        e.printStackTrace();
    }
    return versionCode;
}



1.1 readJournal()

private void readJournal() throws IOException {
    // Read the journal file through an input stream
    InputStream in = new BufferedInputStream(new FileInputStream(journalFile), IO_BUFFER_SIZE);
    try {
        // Compare the parameters recorded in the existing journal with those
        // passed in through open(), and throw an exception if they are inconsistent
        String magic = readAsciiLine(in);
        String version = readAsciiLine(in);
        String appVersionString = readAsciiLine(in);
        String valueCountString = readAsciiLine(in);
        String blank = readAsciiLine(in);
        if (!MAGIC.equals(magic)
                || !VERSION_1.equals(version)
                || !Integer.toString(appVersion).equals(appVersionString)
                || !Integer.toString(valueCount).equals(valueCountString)
                || !"".equals(blank)) {
            throw new IOException("unexpected journal header: ["
                    + magic + ", " + version + ", " + valueCountString + ", " + blank + "]");
        }

        while (true) {
            try {
                // Read the cached log records into our in-memory cache, the
                // LinkedHashMap<String, Entry> lruEntries; lines whose header
                // is REMOVE are redundant records and are dropped from the map
                readJournalLine(readAsciiLine(in));
            } catch (EOFException endOfJournal) {
                break;
            }
        }
    } finally {
        closeQuietly(in);
    }
}


Now read the readJournalLine() source:

private void readJournalLine(String line) throws IOException {
    String[] parts = line.split(" ");
    if (parts.length < 2) {
        throw new IOException("unexpected journal line: " + line);
    }

    String key = parts[1];
    // A REMOVE line means this entry has been removed from the cache list
    if (parts[0].equals(REMOVE) && parts.length == 2) {
        lruEntries.remove(key);
        return;
    }

    Entry entry = lruEntries.get(key);
    if (entry == null) {
        entry = new Entry(key);
        lruEntries.put(key, entry);
    }

    // A CLEAN header marks data that was cached successfully; record its details
    // in the cache list. The list holds Entry objects, an inner class defined by
    // DiskLruCache to store the cached entry's information.
    if (parts[0].equals(CLEAN) && parts.length == 2 + valueCount) {
        entry.readable = true;
        entry.currentEditor = null;
        entry.setLengths(copyOfRange(parts, 2, parts.length));
    } else if (parts[0].equals(DIRTY) && parts.length == 2) {
        entry.currentEditor = new Editor(entry);
    } else if (parts[0].equals(READ) && parts.length == 2) {
        // this work was already done by calling lruEntries.get()
    } else {
        throw new IOException("unexpected journal line: " + line);
    }
}

At this point you might be wondering: if we keep operating on the cache and keep appending to the journal file, won't it get bigger and bigger? To quote Guo Shen:

DiskLruCache uses a variable, redundantOpCount, to count the number of user operations. It is incremented on every write, read, or cache removal. When it reaches 2000, the journal rebuild is triggered; the redundant, unnecessary records in the journal are automatically removed, ensuring that the journal file always stays within a reasonable size.

Then I read the source code carefully and realized that every get(), completeEdit(), and remove() operation runs this line of code:

if (journalRebuildRequired()) {
    executorService.submit(cleanupCallable);
}

So look at journalRebuildRequired(), whose name says it plainly: it decides whether the journal file needs to be rebuilt.

// The source matches what Guo Shen said
private boolean journalRebuildRequired() {
    final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000;
    return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD
            && redundantOpCount >= lruEntries.size();
}

It's not hard to imagine what happens in cleanupCallable: it not only checks whether the journal file needs to be rebuilt, it also checks whether the cache has exceeded the maximum size and trims it, and it resets redundantOpCount to 0:

if (journalRebuildRequired()) {
    rebuildJournal();
    redundantOpCount = 0;
}


1.2 processJournal()

processJournal() is executed after readJournal(). At this point, all cached entries' information has been read into the in-memory cache list, LinkedHashMap<String, Entry> lruEntries.

What processJournal() does is remove DIRTY data, i.e. entries that were never cached successfully. (When data is cached successfully, the journal holds two lines for it, with headers DIRTY and CLEAN respectively; when it is removed, there are DIRTY and REMOVE lines. I'll make this clear in the journal-file section; for now, go with the flow.) Source:

private void processJournal() throws IOException {
    // Delete the temporary file journalFileTmp if it exists; it is only
    // used by rebuildJournal(), so it is not needed here
    deleteIfExists(journalFileTmp);
    for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) {
        Entry entry = i.next();
        if (entry.currentEditor == null) {
            // The entry was cached successfully, so add its lengths to the cache size
            for (int t = 0; t < valueCount; t++) {
                size += entry.lengths[t];
            }
        } else {
            // The entry was never cached successfully, so clean up the dirty data
            entry.currentEditor = null;
            for (int t = 0; t < valueCount; t++) {
                deleteIfExists(entry.getCleanFile(t));
                deleteIfExists(entry.getDirtyFile(t));
            }
            i.remove();
        }
    }
}


1.3 rebuildJournal()

This is reached in the open() method when the parameters are inconsistent with the existing journal, and the journal file ends up being rebuilt:

private synchronized void rebuildJournal() throws IOException {
    if (journalWriter != null) {
        journalWriter.close();
    }

    // This writes the journal header; you can also think of it as the parameter info
    Writer writer = new BufferedWriter(new FileWriter(journalFileTmp), IO_BUFFER_SIZE);
    writer.write(MAGIC);
    writer.write("\n");
    writer.write(VERSION_1);
    writer.write("\n");
    writer.write(Integer.toString(appVersion));
    writer.write("\n");
    writer.write(Integer.toString(valueCount));
    writer.write("\n");
    writer.write("\n");

    // Write out the log records for the entries already in memory
    for (Entry entry : lruEntries.values()) {
        if (entry.currentEditor != null) {
            writer.write(DIRTY + ' ' + entry.key + '\n');
        } else {
            writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
        }
    }

    writer.close();
    // Note that the data is written to the temporary file journalFileTmp first and
    // only renamed to journalFile at the end. The temp file looks redundant at first
    // glance, but write-then-rename means journalFile is never left half-written
    // if the rebuild is interrupted.
    journalFileTmp.renameTo(journalFile);
    journalWriter = new BufferedWriter(new FileWriter(journalFile, true), IO_BUFFER_SIZE);
}
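
That write-to-temp-then-rename pattern is worth keeping in your toolbox. Here is a minimal, self-contained sketch of the same idea; the class and method names are mine, not DiskLruCache's:

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public final class AtomicWrite {
    // Readers of `target` never observe a half-written file: the content only
    // appears under the final name after it has been fully written and closed
    public static void write(File target, String content) throws IOException {
        File tmp = new File(target.getPath() + ".tmp");
        Writer w = new FileWriter(tmp);
        try {
            w.write(content);
        } finally {
            w.close();
        }
        if (!tmp.renameTo(target)) {
            throw new IOException("failed to rename " + tmp + " to " + target);
        }
    }
}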


2. The journal file

I was going to put this at the end, as the finishing touch. But with so much already mentioned above, if you've read this far, this is actually where it fits best: it connects what came before with what follows.

With the creation above done, we open the phone and see the generated journal file.

Because mine is freshly created with no data stored yet, its log information is incomplete; adding cache entries comes in step three. The comments at the source code address above are also very clear. Just know that whenever we operate on the cache, the operation is recorded in the journal file. A sample journal (from the source's comments) looks like this:

libcore.io.DiskLruCache
1
1
2

CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054
DIRTY 335c4c6028171cfddfbaae1a9c313c52
CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342
REMOVE 335c4c6028171cfddfbaae1a9c313c52
DIRTY 1ab96a171faeeee38496d8b330771a7a
CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234
READ 335c4c6028171cfddfbaae1a9c313c52
READ 3400330d1dfc7f3f7f4b8d4d803dfcf6
  • Line 1: libcore.io.DiskLruCache, a magic string indicating that this is a DiskLruCache journal
  • Line 2: the version number of DiskLruCache, which is always 1
  • Line 3: the version number of the app, matching the value we passed in through open()
  • Line 4: valueCount; here it is 2, so each key corresponds to two cache files (with valueCount = 1, key and cache are one-to-one)
  • Line 5: a blank line; the first five lines together are called the journal header

DIRTY header: this entry is currently being written and has not yet been cached successfully. The header is followed by the key, which here is obtained by MD5-hashing the image URL, because keys must correspond one-to-one with their sources and must not contain special characters.

CLEAN header: the cache data for this key was written successfully. The key is followed by the size of each cached file; with valueCount set to 2, one key corresponds to two cache files.

REMOVE header: the cache data for this key has been deleted

READ header: the cache data for this key has been read

Every DIRTY line should eventually be followed by a CLEAN or REMOVE line for the same key. Otherwise the entry really is "dirty" data and will be deleted automatically, as we covered in [1.2 processJournal()]. That's all for the journal. If the rest doesn't interest you, you can skip it.

At this point there may be a question: why would one key map to multiple values at all? There's one concept that, combined with this, might make it click. That's right: hash collisions.

What is a hash collision? To store data, we first convert the key into a number the system recognizes, a hash code. For example, the character 'a' has ASCII code 97 (binary 01100001), so the character 'a' and the integer 97 both hash to 97. That is a hash collision: different keys computing the same hash code.

Solving the hash collision problem is exactly what lets the same hash slot hold multiple values. There are a number of ways to solve it; here we are talking about separate chaining (the "zipper method"): after hashing, a key maps not to a single value but to the head of a singly linked list. Let's take a look at HashMap's node structure:

static class Entry<K,V> implements Map.Entry<K,V> {
    final K key;
    V value;
    Entry<K,V> next;
    int hash;
    ...
}

The singly linked list is chained through next: on a collision, the new node is appended. When we fetch a value whose key collided, we locate the bucket's linked list and then traverse it, comparing each node's stored key, to get the final value. Because lookup is a traversal, the chain should not grow too long; in Java 8's HashMap, once a bucket reaches 8 nodes it is converted into a red-black tree to improve query efficiency.
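
To see a collision concretely, here is a small sketch using the classic pair of Java strings "Aa" and "BB", which share a hash code; HashMap keeps both because the chained nodes are distinguished by equals() during lookup:

import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    public static void main(String[] args) {
        // Both compute to 2112 under String.hashCode()
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        // They land in the same bucket, are chained, and remain distinguishable
        Map<String, String> map = new HashMap<>();
        map.put("Aa", "first");
        map.put("BB", "second");
        System.out.println(map.get("Aa")); // first
        System.out.println(map.get("BB")); // second
    }
}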


3. Write cache

3.1 DiskLruCache.Editor

The cache is written through a DiskLruCache.Editor, obtained by passing edit() the key the entry will be stored under. We just need to make sure the key is unique and contains no special characters:

try {
    String imageUrl = "https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/54a4b40807034b3090708c935689345f~tplv-k3u1fbpfcp-zoom-crop-mark:1304:1304:1304:734.awebp?";
    String key = hashKeyForDisk(imageUrl);
    DiskLruCache.Editor editor = mDiskLruCache.edit(key);
} catch (IOException e) {
    e.printStackTrace();
}


public String hashKeyForDisk(String key) {
	String cacheKey;
	try {
		final MessageDigest mDigest = MessageDigest.getInstance("MD5");
		mDigest.update(key.getBytes());
		cacheKey = bytesToHexString(mDigest.digest());
	} catch (NoSuchAlgorithmException e) {
		cacheKey = String.valueOf(key.hashCode());
	}
	return cacheKey;
}
 
private String bytesToHexString(byte[] bytes) {
	StringBuilder sb = new StringBuilder();
	for (int i = 0; i < bytes.length; i++) {
		String hex = Integer.toHexString(0xFF & bytes[i]);
		if (hex.length() == 1) {
			sb.append('0');
		}
		sb.append(hex);
	}
	return sb.toString();
}



So let's see what edit() does:

private synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException {
    // Check whether the journal output stream has been closed; if so, throw
    checkNotClosed();
    // Check whether the key is valid; if it contains spaces or newlines, throw
    // (MD5-hashing the key is recommended precisely to avoid special characters)
    validateKey(key);
    Entry entry = lruEntries.get(key);

    // When edit() is called directly on DiskLruCache, expectedSequenceNumber is
    // ANY_SEQUENCE_NUMBER, so this check is skipped. It matters when the Editor is
    // obtained through a Snapshot ("snapshot is stale"): scroll through the
    // Snapshot inner class and you'll find this code:
    /*
     * public Editor edit() throws IOException {
     *     return DiskLruCache.this.edit(key, sequenceNumber);
     * }
     */
    // sequenceNumber is incremented in completeEdit(), so comparing it here tells
    // whether the entry has been modified since the snapshot was taken
    if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER
            && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) {
        return null; // snapshot is stale
    }
    if (entry == null) {
        // The key is not in the cache list yet: create it and store it
        entry = new Entry(key);
        lruEntries.put(key, entry);
    } else if (entry.currentEditor != null) {
        // The entry already has an Editor, meaning it is being written right now
        return null; // another edit is in progress
    }

    Editor editor = new Editor(entry);
    entry.currentEditor = editor;

    // flush the journal before creating files to prevent file leaks
    // Write a DIRTY line to the journal for the entry being cached
    journalWriter.write(DIRTY + ' ' + key + '\n');
    journalWriter.flush();
    return editor;
}


3.2 editor.newOutputStream(0)

From the Editor we get an OutputStream into which we can write our data. Remember that one key can have multiple values? The index parameter picks which one; with valueCount = 1, one-to-one, the index is 0. Look at the source below to verify what we said:

public OutputStream newOutputStream(int index) throws IOException {
    synchronized (DiskLruCache.this) {
        if (entry.currentEditor != this) {
            throw new IllegalStateException();
        }
        return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index)));
    }
}

Then entry.getDirtyFile(index):

// The same key is reused; i directly distinguishes the different cache files for one key
public File getDirtyFile(int i) {
    return new File(directory, key + "." + i + ".tmp");
}

With this OutputStream in hand (DiskLruCache has already generated the cache file name for us), we can simply read from the download's input stream and write to the OutputStream. The code is as follows:

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            String imageUrl = "https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/54a4b40807034b3090708c935689345f~tplv-k3u1fbpfcp-zoom-crop-mark:1304:1304:1304:734.awebp?";
            String key = hashKeyForDisk(imageUrl);
            DiskLruCache.Editor editor = mDiskLruCache.edit(key);
            if (editor != null) {
                OutputStream outputStream = editor.newOutputStream(0);
                if (downloadUrlToStream(imageUrl, outputStream)) {
                    // see 3.3, editor.commit()
                    editor.commit();
                } else {
                    editor.abort();
                }
            }
            // flush() forces buffered journal operations out to the filesystem
            mDiskLruCache.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();

// The basic download code
private boolean downloadUrlToStream(String urlString, OutputStream outputStream) {
    HttpURLConnection urlConnection = null;
    BufferedOutputStream out = null;
    BufferedInputStream in = null;
    try {
        final URL url = new URL(urlString);
        urlConnection = (HttpURLConnection) url.openConnection();
        in = new BufferedInputStream(urlConnection.getInputStream(), 8 * 1024);
        out = new BufferedOutputStream(outputStream, 8 * 1024);
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
        }
        return true;
    } catch (final IOException e) {
        e.printStackTrace();
    } finally {
        if (urlConnection != null) {
            urlConnection.disconnect();
        }
        try {
            if (out != null) {
                out.close();
            }
            if (in != null) {
                in.close();
            }
        } catch (final IOException e) {
            e.printStackTrace();
        }
    }
    return false;
}

And with that, one entry has been written to the cache.

3.3 editor.commit()

public void commit() throws IOException {
    if (hasErrors) {
        // This calls the completeEdit() method we mentioned earlier;
        // false is passed because the cache write failed
        completeEdit(this, false);
        // After a failed write, remove the key to keep it from becoming dirty data
        remove(entry.key); // the previous entry is stale
    } else {
        completeEdit(this, true);
    }
}

Keep tracking down into completeEdit():

private synchronized void completeEdit(Editor editor, boolean success) throws IOException {
    Entry entry = editor.entry;
    // A normal cache write never triggers this check; it fires only if the entry's
    // current editor is a different Editor object, e.g. when editing asynchronously
    // and another Editor got hold of the entry before editor.commit() ran
    if (entry.currentEditor != editor) {
        throw new IllegalStateException();
    }

    // if this edit is creating the entry for the first time, every index must have a value
    // If this key has never been cached (not readable), check each index once.
    // If a dirty cache file is missing, abort: editor.abort() in turn calls
    // this method with false, i.e. completeEdit(this, false)
    if (success && !entry.readable) {
        for (int i = 0; i < valueCount; i++) {
            if (!entry.getDirtyFile(i).exists()) {
                editor.abort();
                throw new IllegalStateException("edit didn't create file " + i);
            }
        }
    }

    for (int i = 0; i < valueCount; i++) {
        File dirty = entry.getDirtyFile(i);
        if (success) {
            if (dirty.exists()) {
                // Cache succeeded and the dirty file exists: rename it to the
                // clean file and recalculate the cache size
                File clean = entry.getCleanFile(i);
                dirty.renameTo(clean);
                long oldLength = entry.lengths[i];
                long newLength = clean.length();
                entry.lengths[i] = newLength;
                size = size - oldLength + newLength;
            }
        } else {
            // Cache failed: delete the (possibly incomplete) dirty file if it exists
            deleteIfExists(dirty);
        }
    }

    // Bump the operation count used to decide when to rebuild the journal
    redundantOpCount++;
    entry.currentEditor = null;
    if (entry.readable | success) {
        entry.readable = true;
        // Cache succeeded: write a CLEAN line to the journal
        journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
        if (success) {
            // On success, bump the sequence number
            entry.sequenceNumber = nextSequenceNumber++;
        }
    } else {
        // Cache failed: remove the dirty entry and write a REMOVE line to the journal
        lruEntries.remove(entry.key);
        journalWriter.write(REMOVE + ' ' + entry.key + '\n');
    }

    // Finally, check whether the cache size exceeds the configured maximum, or
    // whether the journal needs rebuilding; if so, schedule the cleanup
    if (size > maxSize || journalRebuildRequired()) {
        executorService.submit(cleanupCallable);
    }
}

executorService.submit(cleanupCallable); continue into its source:

private final Callable<Void> cleanupCallable = new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        synchronized (DiskLruCache.this) {
            if (journalWriter == null) {
                return null; // closed
            }
            // Evict entries while the cache exceeds its maximum size
            trimToSize();
            // See whether the journal file needs to be rebuilt
            if (journalRebuildRequired()) {
                rebuildJournal();
                redundantOpCount = 0;
            }
        }
        return null;
    }
};

And trimToSize():

private void trimToSize() throws IOException {
    while (size > maxSize) {
        // Evict starting from the first entry, the least recently used one.
        // remove() deletes the cache files and writes a REMOVE line to the journal
        final Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next();
        remove(toEvict.getKey());
    }
}

That's DiskLruCache's write path done. At this point you might be asking: how exactly is the LRU algorithm implemented? We haven't covered that yet, but don't worry; after the read-cache section, everything will be clear.


4. Read cache

Straight to the code:

try {
    String imageUrl = "https://gimg2.baidu.com/image_search/src=http%3A%2F%2Fclubimg.club.vmall.com%2Fdata%2Fattachment%2Fforum%2F202107%2F18%2F174756ssxumfsxiagrlg5u.png&refer=http%3A%2F%2Fclubimg.club.vmall.com&app=2002&size=f9999,10000&q=a80&n=0&g=0n&fmt=jpeg?sec=1645087307&t=6eac8b4b750f527afb48c5d852bb2272";
    String key = hashKeyForDisk(imageUrl);
    // Get a DiskLruCache.Snapshot object for this key via get()
    DiskLruCache.Snapshot snapShot = mDiskLruCache.get(key);
    if (snapShot != null) {
        // Get the input stream from the Snapshot; with valueCount = 1
        // (one cache file per key), the index here is 0
        InputStream is = snapShot.getInputStream(0);
        // With the input stream, the cached bitmap follows; if the cached data
        // were a plain file, we would copy it out with an output stream instead
        Bitmap bitmap = BitmapFactory.decodeStream(is);
        mImage.setImageBitmap(bitmap);
    }
} catch (IOException e) {
    e.printStackTrace();
}

Then mDiskLruCache.get(key):

/**
 * Returns a snapshot of the entry named {@code key}, or null if it doesn't
 * exist is not currently readable. If a value is returned, it is moved to
 * the head of the LRU queue.
 */
public synchronized Snapshot get(String key) throws IOException {
    // Pseudo-code: some lines are omitted. Following the valueCount-per-key
    // principle, build an input stream for each of the key's cache files
    InputStream[] ins = new InputStream[valueCount];
    try {
        for (int i = 0; i < valueCount; i++) {
            ins[i] = new FileInputStream(entry.getCleanFile(i));
        }
    } catch (FileNotFoundException e) {
        // a file must have been deleted manually!
        return null;
    }
    // Bump the operation count used for journal rebuilding
    redundantOpCount++;
    // Write a READ line to the journal
    journalWriter.append(READ + ' ' + key + '\n');
    if (journalRebuildRequired()) {
        executorService.submit(cleanupCallable);
    }

    return new Snapshot(key, entry.sequenceNumber, ins);
}


4.1 How the LRU algorithm is implemented

There's a comment on the get() method. Let's see what it says:

/*
 * Returns a snapshot of the entry named {@code key}, or null if it doesn't
 * exist is not currently readable. If a value is returned, it is moved to
 * the head of the LRU queue.
 */

Translated: it returns a snapshot of the entry named key, or null if it does not exist or is not currently readable. If a value is returned, the entry is moved to the head of the LRU queue (which, in the LinkedHashMap itself, is the tail).

Remember trimToSize(), which runs during a write when size > maxSize:

private void trimToSize() throws IOException {
    while (size > maxSize) {
        // Start with the first entry, the least recently used one, and remove it
        final Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next();
        remove(toEvict.getKey());
    }
}

Now ask yourself a question: suppose the entry we just READ happens to be the first one in lruEntries. By DiskLruCache's logic it was just used and should not be deleted, yet trimToSize() removes the first entry. What's going on?

LinkedHashMap is a hash table (an array) plus a doubly linked list. LruCache and DiskLruCache both rely mainly on LinkedHashMap's ordering, which the following code verifies:

// DiskLruCache constructs its map the same way: accessOrder = true
Map<String, String> linkedHashMap = new LinkedHashMap<>(16, 0.75f, true);
linkedHashMap.put("name1", "josan1");
linkedHashMap.put("name2", "josan2");
linkedHashMap.put("name3", "josan3");
Log.e("Test verification", "Starting order:");
Set<Map.Entry<String, String>> set = linkedHashMap.entrySet();
Iterator<Map.Entry<String, String>> iterator = set.iterator();
while (iterator.hasNext()) {
    Map.Entry entry = iterator.next();
    String key = (String) entry.getKey();
    String value = (String) entry.getValue();
    Log.e("Test verification", "key:" + key + ",value:" + value);
}
Log.e("Test verification", "Move the Entry with key name1 to the end of the list via get():");
linkedHashMap.get("name1");
Set<Map.Entry<String, String>> set2 = linkedHashMap.entrySet();
Iterator<Map.Entry<String, String>> iterator2 = set2.iterator();
while (iterator2.hasNext()) {
    Map.Entry entry = iterator2.next();
    String key = (String) entry.getKey();
    String value = (String) entry.getValue();
    Log.e("Test verification", "key:" + key + ",value:" + value);
}

Finally, look at the printed output: the starting order is name1, name2, name3; after get("name1"), the order becomes name2, name3, name1.

You can see that the value fetched with get() ends up at the tail of the LinkedHashMap, so in trimToSize() it is always the least recently used entries that get removed. In practical use, before downloading an image we should always call get(key) first; only on a cache miss do we download and then cache.
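
Putting the read and write paths together, here is a hedged sketch of that get-first flow. loadBitmap() is my name for it, and it assumes the mDiskLruCache, hashKeyForDisk() and downloadUrlToStream() pieces defined earlier; in real code the download must run off the main thread:

private Bitmap loadBitmap(String imageUrl) throws IOException {
    String key = hashKeyForDisk(imageUrl);
    DiskLruCache.Snapshot snapshot = mDiskLruCache.get(key);
    if (snapshot == null) {
        // Cache miss: download into the editor's stream, then look up the key again
        DiskLruCache.Editor editor = mDiskLruCache.edit(key);
        if (editor != null) {
            if (downloadUrlToStream(imageUrl, editor.newOutputStream(0))) {
                editor.commit();
            } else {
                editor.abort();
            }
        }
        snapshot = mDiskLruCache.get(key);
    }
    // Cache hit (or freshly cached): decode straight from the cached stream
    return snapshot != null ? BitmapFactory.decodeStream(snapshot.getInputStream(0)) : null;
}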

5. Remove the cache

The code to remove a cache entry:

try {
    String imageUrl = "https://gimg2.baidu.com/image_search/src=http%3A%2F%2Fclubimg.club.vmall.com%2Fdata%2Fattachment%2Fforum%2F202107%2F18%2F174756ssxumfsxiagrlg5u.png&refer=http%3A%2F%2Fclubimg.club.vmall.com&app=2002&size=f9999,10000&q=a80&n=0&g=0n&fmt=jpeg?sec=1645087307&t=6eac8b4b750f527afb48c5d852bb2272";
    String key = hashKeyForDisk(imageUrl);
    mDiskLruCache.remove(key);
} catch (IOException e) {
    e.printStackTrace();
}

Looking at the remove() source (pseudo-code again, and much of it is the same as what we've already seen):

public synchronized boolean remove(String key) throws IOException {
    ···
    Entry entry = lruEntries.get(key);
    if (entry == null || entry.currentEditor != null) {
        return false;
    }

    for (int i = 0; i < valueCount; i++) {
        File file = entry.getCleanFile(i);
        if (!file.delete()) {
            throw new IOException("failed to delete " + file);
        }
        // After each cache file is deleted, recalculate the cache size
        size -= entry.lengths[i];
        entry.lengths[i] = 0;
    }
    ···
    // Write a REMOVE line to the journal
    journalWriter.append(REMOVE + ' ' + key + '\n');
    // Remove the entry from the in-memory cache list
    lruEntries.remove(key);
    ···
    return true;
}

6. References

DiskLruCache is done here. If anything is lacking, additions are welcome. Life is hard, but I keep trying. Go, stranger! Lately the author has also been grinding LeetCode problems; for solutions with explanations, search for the public account "Android people are advancing" (this weak chicken won't post a QR code).

  • Android DiskLruCache fully resolved, the best solution for hard disk caching

  • Android cache implementation analysis