Analysis of each implementation class of Cache

1. PerpetualCache (the default, "real" cache implementation)

As mentioned above, most cache implementations are backed by a HashMap, and this one is no exception: every cache operation simply calls the corresponding HashMap method. The snippet below shows the common operations putObject, getObject, and removeObject. Note the constructor, which takes a String id. Beyond that there is little to say about the cache itself.

What is worth a closer look are the hashCode and equals methods.

The hashCode method hashes the id, because the id is the namespace, the namespace is unique, and the cache scope is the namespace.

This is a good reminder of the contract to observe when overriding equals:

1. Reflexivity: for any non-null reference value x, x.equals(x) must return true.
2. Symmetry: for any non-null reference values x and y, x.equals(y) must return true if and only if y.equals(x) returns true.
3. Transitivity: if x.equals(y) is true and y.equals(z) is true, then x.equals(z) must be true.
4. Consistency: as long as the objects being compared are not modified, repeated invocations must keep returning the same result.
5. Non-nullity: for any non-null reference value x, x.equals(null) must return false.

/**
 * Permanent cache.
 *
 * @author Clinton Begin
 */
public class PerpetualCache implements Cache {

  private final String id;

  private final Map<Object, Object> cache = new HashMap<>();

  public PerpetualCache(String id) {
    this.id = id;
  }
  // ... a few methods omitted here
  @Override
  public void putObject(Object key, Object value) {
    cache.put(key, value);
  }
  @Override
  public Object getObject(Object key) {
    return cache.get(key);
  }
  @Override
  public Object removeObject(Object key) {
    return cache.remove(key);
  }

  @Override
  public void clear() {
    cache.clear();
  }

  @Override
  public boolean equals(Object o) {
    if (getId() == null) {
      throw new CacheException("Cache instances require an ID.");
    }
    if (this == o) {
      return true;
    }
    if (!(o instanceof Cache)) {
      return false;
    }

    Cache otherCache = (Cache) o;
    return getId().equals(otherCache.getId());
  }

  @Override
  public int hashCode() {
    if (getId() == null) {
      throw new CacheException("Cache instances require an ID.");
    }
    return getId().hashCode();
  }
}

2. LruCache (corresponds to the eviction="LRU" setting)

LruCache keeps a LinkedHashMap (the keyMap) with a default size of 1024 to track how recently each key was used. When a value is stored, the key is also recorded in the keyMap; if that pushes the keyMap over its limit, the eldest key is remembered and the corresponding entry is removed from the delegate. When a value is read, the key is first looked up in the keyMap purely to refresh its recency, and then the value is fetched from the delegate. (In an access-ordered LinkedHashMap, a get moves the entry to the end of the iteration order, so over time the first entry is always the least recently used one.)
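To see the LinkedHashMap trick in isolation, here is a minimal standalone sketch (plain JDK code, not part of MyBatis; all names are made up) of an access-ordered LinkedHashMap capped at three entries. Touching a key with get saves it from eviction, which is exactly the behaviour LruCache relies on:

import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapLruDemo {
  public static void main(String[] args) {
    int capacity = 3;
    // accessOrder = true: iteration order is least-recently-accessed first
    Map<String, String> lru = new LinkedHashMap<String, String>(capacity, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > capacity; // returning true evicts the eldest entry from this map
      }
    };

    lru.put("a", "1");
    lru.put("b", "2");
    lru.put("c", "3");
    lru.get("a");      // touch "a" so it becomes the most recently used
    lru.put("d", "4"); // exceeds the cap, so the eldest entry ("b") is evicted

    System.out.println(lru.keySet()); // prints [c, a, d]
  }
}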

/**
 * Lru (least recently used) cache decorator.
 *
 * @author Clinton Begin
 */
public class LruCache implements Cache {
  
  // The decorated cache, usually a PerpetualCache
  private final Cache delegate;
  
  // keyMap does the LRU bookkeeping
  private Map<Object, Object> keyMap;
  // The oldest key
  private Object eldestKey;

  
  // The constructor takes a Cache (not a String); this is the cache being wrapped
  public LruCache(Cache delegate) {
    this.delegate = delegate;
    // The default size is 1024
    setSize(1024);
  }
  // Get the id, which is the namespace
  @Override
  public String getId() {
    return delegate.getId();
  }
 // Delegate the operation to the delegate
  @Override
  public int getSize() {
    return delegate.getSize();
  }
  // The LRU algorithm is implemented with a LinkedHashMap. Note that the last constructor argument, accessOrder, is true.
  // Implementing LRU with a LinkedHashMap was introduced earlier.
  public void setSize(final int size) {
    keyMap = new LinkedHashMap<Object, Object>(size, .75F, true) {
      private static final long serialVersionUID = 4267176411845948333L;
      // When this method returns true, the eldest (head) entry is removed from the map.
      @Override
      protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
         // Whether the limit is exceeded
        boolean tooBig = size() > size;
        if (tooBig) {
          // eldest is the head node of the LinkedHashMap, i.e. the least recently used entry; remember its key
          eldestKey = eldest.getKey();
        }
        return tooBig;
      }
    };
  }

  // Put into the delegate, then record the key via cycleKeyList (which may evict the eldest key from the delegate)
  @Override
  public void putObject(Object key, Object value) {
    delegate.putObject(key, value);
    cycleKeyList(key);
  }

  @Override
  public Object getObject(Object key) {
    // Why call get here without using the result?
    // Because the get itself refreshes the key's recency; that is how LinkedHashMap implements LRU.
    keyMap.get(key); 
    return delegate.getObject(key);
  }

  @Override
  public Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    delegate.clear();
    keyMap.clear();
  }
 // Look at this method, which is called on putObject and basically removes the oldest key from the delegate.
  private void cycleKeyList(Object key) {
    keyMap.put(key, key);
    if (eldestKey != null) {
      delegate.removeObject(eldestKey);
      eldestKey = null;
    }
  }
}

3. FifoCache

The FifoCache implementation is relatively simple: it uses a queue (a Deque) to implement FIFO eviction. The default limit is 1024, and only put operations enqueue keys. (A small usage sketch follows the code.)

/**
 * FIFO (first in, first out) cache decorator.
 *
 * @author Clinton Begin
 */
public class FifoCache implements Cache {

  private final Cache delegate;
  // Use queues to implement FIFO
  private final Deque<Object> keyList;
  private int size;

  public FifoCache(Cache delegate) {
    this.delegate = delegate;
    this.keyList = new LinkedList<>();
    this.size = 1024;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  public void setSize(int size) {
    this.size = size;
  }

  @Override
  public void putObject(Object key, Object value) {
    cycleKeyList(key);
    delegate.putObject(key, value);
  }

  @Override
  public Object getObject(Object key) {
    return delegate.getObject(key);
  }

  @Override
  public Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    delegate.clear();
    keyList.clear();
  }
  // Append the key to the queue; if the queue then exceeds the configured size, remove the head key from both the queue and the delegate.
  private void cycleKeyList(Object key) {
    keyList.addLast(key);
    if (keyList.size() > size) {
      Object oldestKey = keyList.removeFirst();
      delegate.removeObject(oldestKey);
    }
  }
}
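A tiny usage sketch of FifoCache (illustrative only; the namespace string and keys are made up, and the imports assume the usual MyBatis package layout). With the size lowered to 2, the third put pushes the first key out of the delegate:

import org.apache.ibatis.cache.decorators.FifoCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class FifoCacheSketch {
  public static void main(String[] args) {
    FifoCache cache = new FifoCache(new PerpetualCache("demo-namespace"));
    cache.setSize(2);

    cache.putObject("k1", "v1");
    cache.putObject("k2", "v2");
    cache.putObject("k3", "v3"); // the queue now exceeds 2, so "k1" is removed from the delegate

    System.out.println(cache.getObject("k1")); // null
    System.out.println(cache.getObject("k3")); // v3
  }
}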

4. LoggingCache

This implementation is pretty simple: getObject counts requests and hits and logs the hit ratio; everything else is delegated. There is not much to say.

/**
 * @author Clinton Begin
 */
public class LoggingCache implements Cache {

  private final Log log;
  private final Cache delegate;
  protected int requests = 0;
  protected int hits = 0;

  public LoggingCache(Cache delegate) {
    this.delegate = delegate;
    this.log = LogFactory.getLog(getId());
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  @Override
  public void putObject(Object key, Object object) {
    delegate.putObject(key, object);
  }

  @Override
  public Object getObject(Object key) {
    requests++;
    final Object value = delegate.getObject(key);
    if (value != null) {
      hits++;
    }
    if (log.isDebugEnabled()) {
      log.debug("Cache Hit Ratio [" + getId() + "]." + getHitRatio());
    }
    return value;
  }

  @Override
  public Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }

  private double getHitRatio() {
    return (double) hits / (double) requests;
  }
}

5. SoftCache

SoftCache wraps each key and value into a SoftEntry (a SoftReference to the value that also remembers the key) and stores that in the delegate. It additionally keeps hardLinksToAvoidGarbageCollection (a deque of strong references) and queueOfGarbageCollectedEntries (a reference queue). On put and remove, before touching the delegate it first drains the reference queue: anything found there has already been garbage collected, so its key is removed from the delegate. On get, the SoftReference is fetched from the delegate; if its referent is still there (not yet collected), a strong reference to it is added to hardLinksToAvoidGarbageCollection, and if that collection grows beyond numberOfHardLinks (256 by default) the element at its tail is removed.
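Before the source, a minimal standalone sketch (plain JDK code, not MyBatis; the Entry class and key are made up) of the SoftReference plus ReferenceQueue mechanism the class is built on. Entry plays the role of SoftEntry, and polling the queue is what removeGarbageCollectedItems does:

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;

public class SoftReferenceQueueDemo {
  // Mirrors SoftEntry: a SoftReference to the value that also remembers its cache key.
  static class Entry extends SoftReference<Object> {
    final Object key;
    Entry(Object key, Object value, ReferenceQueue<Object> queue) {
      super(value, queue);
      this.key = key;
    }
  }

  public static void main(String[] args) {
    ReferenceQueue<Object> queue = new ReferenceQueue<>();
    Entry entry = new Entry("someKey", new byte[1024], queue);

    // While the value has not been collected, get() returns it and the queue stays empty.
    System.out.println(entry.get() != null);  // normally true (soft refs are only cleared under memory pressure)
    System.out.println(queue.poll() == null); // true

    // Once the GC reclaims the softly reachable value, entry.get() returns null and the
    // entry is enqueued, so a later poll() hands it back together with its key:
    Reference<?> collected = queue.poll();
    if (collected != null) {
      System.out.println("evict key: " + ((Entry) collected).key);
    }
  }
}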

/**
 * Soft Reference cache decorator
 * Thanks to Dr. Heinz Kabutz for his guidance here.
 *
 * @author Clinton Begin
 */
public class SoftCache implements Cache {
  // Deque holding strong references to recently accessed values
  private final Deque<Object> hardLinksToAvoidGarbageCollection;
  // The reference queue onto which collected SoftEntry objects are enqueued
  private final ReferenceQueue<Object> queueOfGarbageCollectedEntries;
  private final Cache delegate;
  // Maximum number of strong references to keep; the default is 256
  private int numberOfHardLinks;

  public SoftCache(Cache delegate) {
    this.delegate = delegate;
    this.numberOfHardLinks = 256;
    this.hardLinksToAvoidGarbageCollection = new LinkedList<>();
    this.queueOfGarbageCollectedEntries = new ReferenceQueue<>();
  }

  @Override
  public String getId() {
    return delegate.getId();
  }
  // What does removeGarbageCollectedItems do? See the end of the class.
  @Override
  public int getSize() {
    removeGarbageCollectedItems();
    return delegate.getSize();
  }

  public void setSize(int size) {
    this.numberOfHardLinks = size;
  }

  @Override
  public void putObject(Object key, Object value) {
    removeGarbageCollectedItems();
    // Wrap key and value as SoftEntry values.
    delegate.putObject(key, new SoftEntry(key, value, queueOfGarbageCollectedEntries));
  }
 //
  @Override
  public Object getObject(Object key) {
    Object result = null;
    @SuppressWarnings("unchecked") // assumed delegate cache is totally managed by this cache
    // Get the entry from the delegate; it is stored as a SoftReference
    SoftReference<Object> softReference = (SoftReference<Object>) delegate.getObject(key);
    if (softReference != null) {
      result = softReference.get();
      if (result == null) {
        delegate.removeObject(key);
      } else {
      // Not null means the value has not been collected yet, so add a strong reference to it in hardLinksToAvoidGarbageCollection
        // See #586 (and #335) modifications need more than a read lock
        synchronized (hardLinksToAvoidGarbageCollection) {
          hardLinksToAvoidGarbageCollection.addFirst(result);
          // If the deque grows beyond numberOfHardLinks, drop the element at its tail
          if (hardLinksToAvoidGarbageCollection.size() > numberOfHardLinks) {
            hardLinksToAvoidGarbageCollection.removeLast();
          }
        }
      }
    }
    return result;
  }

  @Override
  public Object removeObject(Object key) {
    removeGarbageCollectedItems();
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    synchronized (hardLinksToAvoidGarbageCollection) {
      hardLinksToAvoidGarbageCollection.clear();
    }
    removeGarbageCollectedItems();
    delegate.clear();
  }
  // This method is called from many places. It polls queueOfGarbageCollectedEntries; every entry it gets back
  // has already been reclaimed by the GC, so the corresponding key is removed from the delegate.
  // Question: should the entry also be removed from hardLinksToAvoidGarbageCollection?
  // No. If it came out of queueOfGarbageCollectedEntries it was already garbage, which means no strong reference to it existed there.
  private void removeGarbageCollectedItems() {
    SoftEntry sv;
    while ((sv = (SoftEntry) queueOfGarbageCollectedEntries.poll()) != null) {
      delegate.removeObject(sv.key);
    }
  }

  private static class SoftEntry extends SoftReference<Object> {
    private final Object key;

    SoftEntry(Object key, Object value, ReferenceQueue<Object> garbageCollectionQueue) {
      super(value, garbageCollectionQueue);
      this.key = key;
    }
  }
}

6. SynchronizedCache

This one is very simple: it just adds a synchronized lock to every operation.

/**
 * @author Clinton Begin
 */
public class SynchronizedCache implements Cache {

  private final Cache delegate;

  public SynchronizedCache(Cache delegate) {
    this.delegate = delegate;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public synchronized int getSize() {
    return delegate.getSize();
  }

  @Override
  public synchronized void putObject(Object key, Object object) {
    delegate.putObject(key, object);
  }

  @Override
  public synchronized Object getObject(Object key) {
    return delegate.getObject(key);
  }

  @Override
  public synchronized Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public synchronized void clear() {
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }
}

7. ScheduledCache

Two properties: clearInterval (the cleanup interval, one hour by default) and lastClear (the time of the last cleanup). In remove, get, put and getSize it first checks whether the current time minus lastClear is greater than clearInterval; if so, it clears the whole cache and lastClear is set to the current time.

A question:

  1. What’s wrong with this cleanup?

Yes, there is a problem: the cleanup is bound to cache operations, so if no operation ever arrives, the elapsed interval means nothing and no cleanup actually happens. A rough sketch of a decoupled alternative follows.
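If cleanup really needed to be independent of traffic, a decorator could drive clear() from a background timer instead. The following is only a rough sketch of that idea (not MyBatis code; the class name is made up, it assumes a Cache interface with exactly the methods shown in this article, and it ignores scheduler shutdown):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical decorator: clears the wrapped cache on a timer instead of piggybacking on get/put/remove.
public class TimerClearingCache implements Cache {

  private final Cache delegate;
  private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

  public TimerClearingCache(Cache delegate, long clearIntervalMillis) {
    this.delegate = delegate;
    // Clear the delegate every clearIntervalMillis, whether or not anyone touches the cache.
    scheduler.scheduleAtFixedRate(delegate::clear, clearIntervalMillis, clearIntervalMillis, TimeUnit.MILLISECONDS);
  }

  @Override public String getId() { return delegate.getId(); }
  @Override public int getSize() { return delegate.getSize(); }
  @Override public void putObject(Object key, Object value) { delegate.putObject(key, value); }
  @Override public Object getObject(Object key) { return delegate.getObject(key); }
  @Override public Object removeObject(Object key) { return delegate.removeObject(key); }
  @Override public void clear() { delegate.clear(); }
}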

/**
 * @author Clinton Begin
 */
public class ScheduledCache implements Cache {

  private final Cache delegate;
  // The cleanup interval
  protected long clearInterval;
  // The time of the last cleanup
  protected long lastClear;

  public ScheduledCache(Cache delegate) {
    this.delegate = delegate;
    this.clearInterval = TimeUnit.HOURS.toMillis(1);
    this.lastClear = System.currentTimeMillis();
  }

  public void setClearInterval(long clearInterval) {
    this.clearInterval = clearInterval;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    clearWhenStale();
    return delegate.getSize();
  }
   // 
  @Override
  public void putObject(Object key, Object object) {
    clearWhenStale();
    delegate.putObject(key, object);
  }

  @Override
  public Object getObject(Object key) {
    return clearWhenStale() ? null : delegate.getObject(key);
  }

  @Override
  public Object removeObject(Object key) {
    clearWhenStale();
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    lastClear = System.currentTimeMillis();
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }
  // The logic is simple: if (current time - lastClear) exceeds clearInterval, clear the cache.
  private boolean clearWhenStale() {
    if (System.currentTimeMillis() - lastClear > clearInterval) {
      clear();
      return true;
    }
    return false;
  }
}

8. WeakCache

This is similar to SoftCache, so I will not walk through it here.
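The essential difference is the reference type: WeakCache wraps values in WeakReference entries, which the collector may clear as soon as no strong reference to the value remains, whereas soft references normally survive until memory runs low. A tiny standalone sketch (plain JDK code, not MyBatis) of that difference:

import java.lang.ref.WeakReference;

public class WeakReferenceDemo {
  public static void main(String[] args) {
    Object value = new Object();
    WeakReference<Object> weak = new WeakReference<>(value);

    value = null; // drop the only strong reference to the value
    System.gc();  // request a GC; weakly reachable objects become eligible immediately

    // Unlike a SoftReference, which the JVM keeps around until memory runs low,
    // a WeakReference is typically cleared by the very next GC cycle.
    System.out.println(weak.get()); // usually prints null after the GC above
  }
}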

9. SerializedCache

To use this cache the cached values must implement Serializable: values are serialized on put and deserialized on get. The serialization round trip amounts to a deep copy. What is the deep copy for? Safety: callers get their own copy, so mutating a returned object cannot corrupt what is stored in the cache. (A usage sketch follows the code.)

/**
 * @author Clinton Begin
 */
public class SerializedCache implements Cache {

  private final Cache delegate;

  public SerializedCache(Cache delegate) {
    this.delegate = delegate;
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }
  // The value must implement Serializable.
  @Override
  public void putObject(Object key, Object object) {
    if (object == null || object instanceof Serializable) {
      delegate.putObject(key, serialize((Serializable) object));
    } else {
      throw new CacheException("SharedCache failed to make a copy of a non-serializable object: "+ object); }}@Override
  public Object getObject(Object key) {
    Object object = delegate.getObject(key);
     // deserialize
    return object == null ? null : deserialize((byte[]) object);
  }

  @Override
  public Object removeObject(Object key) {
    return delegate.removeObject(key);
  }

  @Override
  public void clear() {
    delegate.clear();
  }

  @Override
  public int hashCode() {
    return delegate.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    return delegate.equals(obj);
  }
  // Make a copy via serialization: effectively a deep copy.
  // The serialization itself is straightforward.
  private byte[] serialize(Serializable value) {
    try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(value);
      oos.flush();
      return bos.toByteArray();
    } catch (Exception e) {
      throw new CacheException("Error serializing object. Cause: "+ e, e); }}// deserialize
  private Serializable deserialize(byte[] value) {
    SerialFilterChecker.check();
    Serializable result;
    try (ByteArrayInputStream bis = new ByteArrayInputStream(value);
        ObjectInputStream ois = new CustomObjectInputStream(bis)) {
      result = (Serializable) ois.readObject();
    } catch (Exception e) {
      throw new CacheException("Error deserializing object. Cause: " + e, e);
    }
    return result;
  }

  public static class CustomObjectInputStream extends ObjectInputStream {

    public CustomObjectInputStream(InputStream in) throws IOException {
      super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc) throws ClassNotFoundException {
      return Resources.classForName(desc.getName());
    }
  }
}
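A small usage sketch of the deep-copy effect described above (illustrative only; the namespace, key and value are made up, and the imports assume the usual MyBatis package layout). The object handed back by getObject is a fresh copy, so mutating it does not touch what is stored in the cache:

import java.util.ArrayList;

import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.SerializedCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class SerializedCacheSketch {
  public static void main(String[] args) {
    Cache cache = new SerializedCache(new PerpetualCache("demo-namespace"));

    ArrayList<String> names = new ArrayList<>();
    names.add("alice");
    cache.putObject("key", names); // stored as a byte[] snapshot

    @SuppressWarnings("unchecked")
    ArrayList<String> copy1 = (ArrayList<String>) cache.getObject("key");
    copy1.add("bob"); // mutates the returned copy only

    @SuppressWarnings("unchecked")
    ArrayList<String> copy2 = (ArrayList<String>) cache.getObject("key");
    System.out.println(copy2);          // prints [alice]: the cached snapshot is untouched
    System.out.println(copy1 == copy2); // false: every get deserializes a fresh copy
  }
}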

10. BlockingCache

BlockingCache has a timeout, and a locks map whose keys are CacheKeys and whose values are CountDownLatches. The lock is acquired during getObject by attempting to put a new latch into locks with putIfAbsent; if nothing was there before, the caller now holds the lock for that key. Note that get and put happen in that order: you have to get first and then put, otherwise releasing the lock fails with an error; just look at the code inside putObject.

And putObject does not acquire the lock itself. Why is that?

Because the thread doing the put already acquired the lock for that same CacheKey (same hashCode and equals) during its earlier get, so its latch is already sitting in locks. The put simply writes the value into the delegate, then removes that latch and counts it down, and nothing goes wrong.

For example, a thread queries a CacheKey first and misses, so the CacheKey and a CountDownLatch(1) are now stored in locks; other threads asking for the same key find the latch and wait there. When that first thread puts the value, it does not need to take any new lock, so it can put directly, and once the put is done the lock is released.

That is how a put becomes visible to the waiting gets.
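Here is a sketch of that calling protocol (illustrative only; loadFromDatabase is a made-up stand-in for whatever actually produces the value on a miss, and the imports assume the usual MyBatis package layout). On a miss the caller keeps the lock until its follow-up putObject releases it:

import org.apache.ibatis.cache.decorators.BlockingCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class BlockingCacheUsageSketch {

  // Made-up stand-in for whatever produces the value on a cache miss (for example a database query).
  static Object loadFromDatabase(Object key) {
    return "value-for-" + key;
  }

  static Object getOrLoad(BlockingCache cache, Object key) {
    Object value = cache.getObject(key); // on a miss, this thread now holds the latch for the key
    if (value == null) {
      try {
        value = loadFromDatabase(key);   // other threads asking for the same key block in getObject meanwhile
      } finally {
        cache.putObject(key, value);     // stores the value AND releases the latch
      }
    }
    return value;
  }

  public static void main(String[] args) {
    BlockingCache cache = new BlockingCache(new PerpetualCache("demo-namespace"));
    System.out.println(getOrLoad(cache, "k")); // first call misses, loads, and caches
    System.out.println(getOrLoad(cache, "k")); // second call hits and releases the lock immediately
  }
}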

public class BlockingCache implements Cache {
  // The timeout period
  private long timeout;
  private final Cache delegate;
  // Key is the CacheKey, value is a CountDownLatch
  private final ConcurrentHashMap<Object, CountDownLatch> locks;

  public BlockingCache(Cache delegate) {
    this.delegate = delegate;
    this.locks = new ConcurrentHashMap<>();
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }

  @Override
  public void putObject(Object key, Object value) {
    try {
      delegate.putObject(key, value);
    } finally {
      // Release the lock. The release is written in finally to make sure the lock is always released.
      releaseLock(key);
    }
  }

  @Override
  public Object getObject(Object key) {
    // Acquire the lock
    acquireLock(key);
    Object value = delegate.getObject(key);
    if (value != null) {
      // Release the lock when it is acquired
      releaseLock(key);
    }
    return value;
  }

  @Override
  public Object removeObject(Object key) {
    // despite of its name, this method is called only to release locks
    releaseLock(key);
    return null;
  }

  @Override
  public void clear() {
    delegate.clear();
  }
 // Look at the lock operation
  private void acquireLock(Object key) {
    
    
    CountDownLatch newLatch = new CountDownLatch(1);
    
    // The while loop is needed: after the owner releases, each waiting thread retries putIfAbsent with its own new latch.
    while (true) {
      // Try to register our latch for this CacheKey. putIfAbsent returns null if nothing was there
      // (we now own the lock), or the latch of whoever already owns it.
      // Thread safety of the registration itself comes from ConcurrentHashMap;
      // the CountDownLatch(1) is only used to make other threads wait until the owner releases.
      CountDownLatch latch = locks.putIfAbsent(key, newLatch);
      
      // latch == null means no one held the lock for this CacheKey; our latch is now registered, so we own the lock.
      if (latch == null) {
        break;
      }
      try {
        // If the timeout is set, wait for timeout, otherwise wait forever.
        if (timeout > 0) {
          boolean acquired = latch.await(timeout, TimeUnit.MILLISECONDS);
          // If no lock is obtained after the timeout period, an error is reported.
          if (!acquired) {
            throw new CacheException(
                "Couldn't get a lock in " + timeout + " for the key " + key + " at the cache " + delegate.getId());
          }
        } else {
          // Wait with no timeout
          latch.await();
        }
      } catch (InterruptedException e) {
        throw new CacheException("Got interrupted while trying to acquire lock for key " + key, e);
      }
    }
  }

  // Remove the CountDownLatch registered for this key and count it down
  private void releaseLock(Object key) {
    CountDownLatch latch = locks.remove(key);
    if (latch == null) {
      throw new IllegalStateException("Detected an attempt at releasing unacquired lock. This should never happen.");
    }
    // countDown wakes the threads awaiting above; their while loop runs again and they retry acquiring the lock.
    latch.countDown();
  }

  public long getTimeout() {
    return timeout;
  }

  public void setTimeout(long timeout) {
    this.timeout = timeout;
  }
}

11. TransactionalCache

A put is not written directly to the delegate; it is staged in an intermediate map (entriesToAddOnCommit). The cache also records keys that missed the cache in a set (entriesMissedInCache).

At commit time the entries in entriesToAddOnCommit are written to the delegate, and every key in entriesMissedInCache that was not also put is written to the delegate with a null value; afterwards both collections are cleared. On rollback, the keys in entriesMissedInCache are removed from the delegate, and both collections are cleared as well.
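A quick usage sketch of the staging behaviour (illustrative only; the namespace, key and value are made up, and the imports assume the usual MyBatis package layout). A put stays invisible in the underlying cache until commit() is called:

import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.TransactionalCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class TransactionalCacheSketch {
  public static void main(String[] args) {
    Cache shared = new PerpetualCache("demo-namespace");      // the real second-level cache
    TransactionalCache txCache = new TransactionalCache(shared);

    txCache.putObject("k", "v");                // staged in entriesToAddOnCommit only
    System.out.println(shared.getObject("k"));  // null: not visible to other sessions yet
    System.out.println(txCache.getObject("k")); // also null: getObject only consults the delegate

    txCache.commit();                           // flushes the staged entries into the delegate
    System.out.println(shared.getObject("k"));  // v
  }
}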

/**
 * <p>Entries buffered here are added to the 2nd level cache when commit or rollback is called.</p>
 * The 2nd level cache transactional buffer.
 * <p>
 * This class holds all cache entries that are to be added to the 2nd level cache during a Session.
 * Entries are sent to the cache when commit is called or discarded if the Session is rolled back.
 * Blocking cache support has been added. Therefore any get() that returns a cache miss
 * will be followed by a put() so any lock associated with the key can be released.
 *
 * @author Clinton Begin
 * @author Eduardo Macarron
 */
public class TransactionalCache implements Cache {

  private static final Log log = LogFactory.getLog(TransactionalCache.class);

  private final Cache delegate;
  // Flag: when set, the delegate will be cleared on commit
  private boolean clearOnCommit;
  // The entity to be added to the cache at commit time
  private final Map<Object, Object> entriesToAddOnCommit;
  // Set of keys that did not hit the cache
  private final Set<Object> entriesMissedInCache;

  public TransactionalCache(Cache delegate) {
    this.delegate = delegate;
    this.clearOnCommit = false;
    this.entriesToAddOnCommit = new HashMap<>();
    this.entriesMissedInCache = new HashSet<>();
  }

  @Override
  public String getId() {
    return delegate.getId();
  }

  @Override
  public int getSize() {
    return delegate.getSize();
  }
 // If there is no value in get, it will be placed in entriesMissedInCache
  @Override
  public Object getObject(Object key) {
    // issue #116
    Object object = delegate.getObject(key);
    if (object == null) {
      entriesMissedInCache.add(key);
    }
    // issue #146
    if (clearOnCommit) {
      return null;
    } else {
      return object;
    }
  }

  // Put does not call the delegate directly; the entry is first staged in entriesToAddOnCommit
  @Override
  public void putObject(Object key, Object object) {
    entriesToAddOnCommit.put(key, object);
  }

  @Override
  public Object removeObject(Object key) {
    return null;
  }

  @Override
  public void clear() {
    clearOnCommit = true;
    entriesToAddOnCommit.clear();
  }
  // Called on commit: clear the delegate first if clearOnCommit is set, then flush the pending entries into it
  public void commit() {
    if (clearOnCommit) {
      delegate.clear();
    }
    flushPendingEntries();
    reset();
  }

  public void rollback() {
    unlockMissedEntries();
    reset();
  }

  private void reset() {
    clearOnCommit = false;
    entriesToAddOnCommit.clear();
    entriesMissedInCache.clear();
  }

  private void flushPendingEntries() {
    // We can use PutAll here
    for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
      delegate.putObject(entry.getKey(), entry.getValue());
    }
    // If the cache is not hit, just put a NULL
    for (Object entry : entriesMissedInCache) {
      if (!entriesToAddOnCommit.containsKey(entry)) {
        delegate.putObject(entry, null);
      }
    }
  }

  // Remove the missed keys from the delegate (used on rollback)
  private void unlockMissedEntries() {
    for (Object entry : entriesMissedInCache) {
      try {
        delegate.removeObject(entry);
      } catch (Exception e) {
        log.warn("Unexpected exception while notifying a rollback to the cache adapter. "
            + "Consider upgrading your cache adapter to the latest version. Cause: "+ e); }}}}Copy the code

Questions:

  1. Why are keys that missed the cache written to the delegate with a null value at commit time?

    I am not sure. Is it for cache penetration, so that a key whose data does not exist in the database still ends up with a null entry in the cache? But then the calling code has to decide what that null actually means. The class javadoc hints at another reason: when a BlockingCache sits underneath, a get() that misses acquires a lock, and the follow-up put(), even with a null value, is what releases it. Even so, I honestly do not fully understand this design.

  2. Why are the keys in the staging map (entriesToAddOnCommit) not removed from the delegate during rollback?

    Because before commit nothing from entriesToAddOnCommit has reached the delegate yet, so there is nothing to remove. But then why does rollback remove the entriesMissedInCache keys from the delegate, when those too are only written at commit time? I still do not really understand that part. (Presumably it pairs with the null puts above: BlockingCache.removeObject, as its own comment says, exists only to release locks, so the rollback path releases the locks held for missed keys.)

With these implementation classes, MyBatis builds its cache up layer by layer, each decorator stacking one capability on top of another: the decorator design pattern. It also feels a bit like a proxy, just with rather a lot of proxies.
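For instance, a read-write second-level cache ends up as a chain of decorators around a PerpetualCache. The stack below is a hand-built sketch (the namespace is made up, the imports assume the usual MyBatis package layout, and the exact chain MyBatis assembles depends on the <cache> configuration), but it shows how each constructor wraps the previous cache:

import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.LoggingCache;
import org.apache.ibatis.cache.decorators.LruCache;
import org.apache.ibatis.cache.decorators.ScheduledCache;
import org.apache.ibatis.cache.decorators.SerializedCache;
import org.apache.ibatis.cache.decorators.SynchronizedCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class DecoratorStackSketch {
  public static void main(String[] args) {
    // Every call travels through the whole stack before reaching the HashMap at the bottom.
    Cache cache =
        new SynchronizedCache(
            new LoggingCache(
                new SerializedCache(
                    new ScheduledCache(
                        new LruCache(
                            new PerpetualCache("com.example.DemoMapper"))))));

    cache.putObject("key", "value");
    System.out.println(cache.getObject("key")); // value
  }
}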

That concludes the analysis of the Cache implementation classes. Please point out any inaccuracies. Thank you.