The shortcomings of Spring Cache

Spring Cache is an excellent caching component.

However, while using Spring Cache, Xiaohei also ran into a few pain points.

For example, suppose there is a requirement to fetch user information in batch for a list of user IDs.

Plan 1

At this point, our code might look like this:

List<User> users = ids.stream()
        .map(id -> getUserById(id))
        .collect(Collectors.toList());

@Cacheable(key = "#p0", unless = "#result == null")
public User getUserById(Long id) {
    // ...
}

The disadvantage of this approach:

Redis is accessed once per ID inside the loop. If every ID hits the cache that is tolerable, but every cache miss results in its own database query.

Plan 2

Some of you might do this:

@Cacheable(key = "#ids.hashCode()")
public Collection<User> getUsersByIds(Collection<Long> ids) {
    // ...
}

The problem with this approach:

The cache key is the hashCode of the ID list, so a request only hits the cache when the hashCode of its ID list matches that of a previous request. Worse, if any single item in the list is modified, the cache for the entire list has to be cleared.

For example:

The first request asks for the IDs 1, 2, 3.

The second request asks for the IDs 1, 2, 4.

In this case, the two requests cannot share any cached data.

And if the data with ID 1 changes, the caches for both requests have to be cleared.
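
With Plan 2, the only safe way to handle such an update is to wipe every cached list, because we cannot tell which hashCode-based keys contain the modified user. A minimal sketch of what that forces us into (the cache name and the update method are hypothetical):

@CacheEvict(cacheNames = "usersByIds", allEntries = true) // clears every cached ID list
public void updateUser(User user) {
    // ... update the user in the database
}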

What does Spring have to say?

The relevant Spring issues:

Github.com/spring-proj…

Github.com/spring-proj…

Readers interested in the details can refer to the issues themselves; a rough translation of the reply follows:

Thank you for your report. The cache abstraction has no concept of this state: if you return a collection, that’s what you’re asking to be stored in the cache. There’s also nothing that forces you to keep the same item types for a given cache, so this assumption is not appropriate for such a high-level abstraction.

My understanding is that for a high-level abstraction framework like Spring Cache, the Cache is method-based, and if the method returns a Collection, the entire Collection is what needs to be cached.

My solution

After much back and forth, Xiaohei decided to build his own wheel.

So what do I want to achieve?

For batch cache reads with multiple keys, I want to look up each key in the cache individually, load only the data that is missing from the cache, and then put the loaded data back into the cache.
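
In pseudo-Java, the flow looks roughly like this (batchGetFromRedis, loadFromDatabase and batchPutIntoRedis are hypothetical helpers used only to illustrate the idea):

// 1. one batched cache lookup for all keys
Map<Long, User> result = batchGetFromRedis(ids);

// 2. collect the keys that were not found in the cache
List<Long> missedIds = ids.stream()
        .filter(id -> !result.containsKey(id))
        .collect(Collectors.toList());

if (!missedIds.isEmpty()) {
    // 3. load only the missing data, e.g. with a single SQL query
    Map<Long, User> loaded = loadFromDatabase(missedIds);
    // 4. write the loaded data back to the cache and merge it into the result
    batchPutIntoRedis(loaded);
    result.putAll(loaded);
}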

Without further ado, here is the source:

Github.com/shenjianeng…

A brief introduction to the overall idea:

  • The core interfaces

    • com.github.shenjianeng.easycache.core.Cache

    • com.github.shenjianeng.easycache.core.MultiCacheLoader

Cache interface

The Cache interface defines some common caching operations. Unlike most caching frameworks, it supports fetching multiple keys from the cache in a single batch.

/**
 * Batch-get from the cache; only keys that are present in the cache
 * appear in the returned map.
 */
@NonNull
Map<K, V> getIfPresent(@NonNull Iterable<K> keys);


/**
 * Batch-get from the cache; for keys that are not present, call
 * {@link MultiCacheLoader#loadCache(java.util.Collection)} to load the data
 * and add it to the cache.
 */
@NonNull
Map<K, V> getOrLoadIfAbsent(@NonNull Iterable<K> keys);
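
A minimal usage sketch (userCache here is a hypothetical Cache<Long, User> instance, not code from the project):

// returns only the entries that are already in the cache
Map<Long, User> cached = userCache.getIfPresent(Arrays.asList(1L, 2L, 3L));

// entries missing from the cache are loaded via MultiCacheLoader and written back
Map<Long, User> all = userCache.getOrLoadIfAbsent(Arrays.asList(1L, 2L, 3L));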

MultiCacheLoader interface

@FunctionalInterface
public interface MultiCacheLoader<K, V> {

    @NonNull
    Map<K, V> loadCache(@NonNull Collection<K> keys);

    default V loadCache(K key) {
        Map<K, V> map = loadCache(Collections.singleton(key));
        if (CollectionUtils.isEmpty(map)) {
            return null;
        }
        return map.get(key);
    }
}

MultiCacheLoader is a functional interface. When Cache#getOrLoadIfAbsent is called and some keys are missing from the cache, the data is loaded through MultiCacheLoader and then added to the cache.
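
Because it is a functional interface, a loader can be written as a lambda. A sketch (userMapper and selectByIds are hypothetical placeholders for whatever DAO loads the users in a single query):

// load all missing users in one database query and key them by id
MultiCacheLoader<Long, User> loader = missedIds ->
        userMapper.selectByIds(missedIds).stream()
                .collect(Collectors.toMap(User::getId, Function.identity()));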

RedisCache

RedisCache is currently the only implementation of the Cache interface. As the class name suggests, it is a Redis-based cache.

First, let’s talk about the general implementation idea:

  1. Use the Redis mget command to fetch cached values in batches. To keep each call efficient, at most 20 keys are fetched per batch.
  2. For data that is not in the cache, decide whether it should be loaded automatically; if so, load it through MultiCacheLoader.
  3. Put the loaded data into the cache. At the same time, a zset is maintained to record all known cache keys, which is later used to clear the cache (a sketch of this bookkeeping follows the batch-get code below).

Without further ado, the source code:

private Map<K, V> doGetOrLoadIfAbsent(Iterable<K> keys, boolean loadIfAbsent) {
    List<String> cacheKeyList = buildCacheKey(keys);
    List<List<String>> partitions = Lists.partition(cacheKeyList, MAX_BATCH_KEY_SIZE);

    List<V> valueList = Lists.newArrayListWithExpectedSize(cacheKeyList.size());

    for (List<String> partition : partitions) {
        // Get multiple keys. Values are returned in the order of the requested keys.
        List<V> values = (List<V>) redisTemplate.opsForValue().multiGet(partition);
        valueList.addAll(values);
    }

    List<K> keysList = Lists.newArrayList(keys);
    List<K> missedKeyList = Lists.newArrayList();

    Map<K, V> map = Maps.newHashMapWithExpectedSize(partitions.size());


    for (int i = 0; i < valueList.size(); i++) {
        V v = valueList.get(i);
        K k = keysList.get(i);
        if (v != null) {
            map.put(k, v);
        } else {
            missedKeyList.add(k);
        }
    }

    if (loadIfAbsent) {
        Map<K, V> missValueMap = multiCacheLoader.loadCache(missedKeyList);

        put(missValueMap);

        map.putAll(missValueMap);
    }

    return map;
}
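
The put side is not shown above. A rough sketch of how step 3 could maintain the known-keys zset (illustrative only, not the actual easy-cache code; a single-key buildCacheKey overload is assumed):

private void doPut(Map<K, V> map) {
    // buildCacheKey(K) is an assumed single-key variant of buildCacheKey(Iterable<K>)
    Map<String, V> cacheKeyMap = Maps.newHashMapWithExpectedSize(map.size());
    map.forEach((k, v) -> cacheKeyMap.put(buildCacheKey(k), v));

    // write all values in one round trip
    redisTemplate.opsForValue().multiSet(cacheKeyMap);

    // record every cache key in the known-keys zset with score 0,
    // so that evictAll() can later find and delete them
    cacheKeyMap.keySet().forEach(cacheKey ->
            redisTemplate.opsForZSet().add(knownKeysName, cacheKey, 0));
}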

Cache clearing method implementation:

public void evictAll() {
    Set<Serializable> serializables = redisTemplate.opsForZSet().rangeByScore(knownKeysName, 0, 0);

    if (!CollectionUtils.isEmpty(serializables)) {
        List<String> cacheKeys = Lists.newArrayListWithExpectedSize(serializables.size());
        serializables.forEach(serializable -> {
            if (serializable instanceof String) {
                cacheKeys.add((String) serializable);
            }
        });
        redisTemplate.delete(cacheKeys);
        redisTemplate.opsForZSet().remove(knownKeysName, cacheKeys.toArray());
    }
}

A few more words

For more details, interested readers can read the source code: easy-cache

Everyone is welcome to fork the project and try it out, or leave a comment in the comment section.

Future plans:

  • Support for caching null values
  • Declarative, annotation-based caching