Caching, as I’m sure you all know, is essential in almost any project. There are many caching tools on the market, such as Redis, Guava Cache, and Ehcache, and you are probably already familiar with them, so instead of covering those tools today, let’s talk about implementing a local cache ourselves. Drawing on the tools above, Mr. Flathead thinks a good local cache starts from the following three areas.

1. Selection of storage collection

To implement a local cache, the storage container must be a key/value data structure; in Java, that means the familiar Map collections. HashMap, Hashtable, and ConcurrentHashMap are the candidates. If we do not need thread safety under high concurrency, HashMap is enough. If we do, we can choose either Hashtable or ConcurrentHashMap, with ConcurrentHashMap preferred because it performs better than Hashtable.
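As a quick illustration, a thread-safe key/value store can be as small as a ConcurrentHashMap wrapper. This is a minimal sketch; the class and method names are illustrative, not part of the demo later in this article:

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal thread-safe key/value store; ConcurrentHashMap handles
// concurrent reads and writes without external locking.
class SimpleStore<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        map.put(key, value);
    }

    public V get(K key) {
        return map.get(key);
    }
}
```

The real cache below wraps the stored value in a Cache object so it can also carry timestamps and hit counts.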

2. Expiration cache processing

Because the cache lives directly in memory, if we never deal with expired entries, memory will fill up with a large number of invalid entries, which is not what we want, so we need to clean them up. Expiration handling can borrow Redis’s approach, which combines periodic deletion with lazy deletion.

Periodic deletion strategy

The periodic deletion strategy scans for expired entries at a fixed interval and deletes them. Its advantage is that expired entries are guaranteed to be removed eventually. It also has drawbacks: an expired entry may not be deleted promptly, depending on the interval we set, and if there is a lot of cached data, each full scan puts considerable pressure on the CPU.
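As a sketch of periodic deletion, a scheduled task can sweep the map at a fixed interval. The class name and the choice of storing deadlines as epoch milliseconds are assumptions for illustration, not the demo’s design:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Periodic sweeper sketch: every periodSeconds, remove entries whose
// deadline (epoch millis) has already passed.
class PeriodicSweeper {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Map<String, Long> deadlines, long periodSeconds) {
        scheduler.scheduleAtFixedRate(
                () -> deadlines.entrySet()
                        .removeIf(e -> e.getValue() <= System.currentTimeMillis()),
                periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}
```

The demo later in this article does the same thing with a plain thread that sleeps between sweeps; a ScheduledExecutorService is simply a more idiomatic way to schedule the work.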

Lazy elimination strategy

The lazy deletion strategy checks whether an entry has expired right before it is used; if it has, the entry is deleted and null is returned. The advantage is that the expiration check happens only on lookup, so it is cheap for the CPU. At the same time, this strategy has a fatal flaw: if a large number of entries expire without ever being looked up again, they become invalid entries that are never removed, occupy a large amount of memory, and can eventually exhaust the server’s memory.
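A lazy-deletion check can be sketched like this: each entry carries its expiry deadline, and get() verifies it on every access. The names and the TTL-in-milliseconds representation are illustrative assumptions, not the demo’s API:

```java
import java.util.concurrent.ConcurrentHashMap;

// Lazy-expiration sketch: expired entries are only detected and
// removed when someone actually looks them up.
class LazyCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long deadline; // epoch millis after which the entry is stale

        Entry(V value, long ttlMillis) {
            this.value = value;
            this.deadline = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Entry<>(value, ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.deadline) {
            // Expired: delete on access and report a miss
            map.remove(key);
            return null;
        }
        return e.value;
    }
}
```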

We have briefly looked at Redis’s two expiration-handling strategies, each with its own advantages and disadvantages. In practice, therefore, we can combine the two, and together they work very well.

3. Cache elimination strategy

Cache eviction should be distinguished from expiration handling: it kicks in when we reach the cap on the number of entries, since after all our memory is not infinite. If we need to keep adding entries, we must evict some existing ones according to a policy to make room for new data. Here are some common eviction policies.

First in, first out

When cache space runs out, the data that entered the cache first is cleared to make room for new data. This policy compares the creation time of cache entries. It is a good choice in scenarios with high requirements on data freshness, since it favors keeping the most recently written data.

Least frequently used policy

Entries are cleared based on how many times they have been used: the least frequently used entries are removed to free space, whether or not they have expired. This policy compares the hit count (hitCount) of entries. Choose it when you want frequently accessed data to stay in the cache.

Least recently used policy

Entries are cleared based on when they were last used: the entry with the oldest last-access timestamp is removed to free space, whether or not it has expired. This policy compares the access time recorded on each GET. It suits hotspot-data scenarios, keeping hot data available.
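For reference, the JDK’s LinkedHashMap can express this policy almost for free: constructed in access order, it moves an entry to the tail on every get, and overriding removeEldestEntry drops the least recently used one when a size cap is exceeded. This is a sketch separate from the demo below:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU sketch: access-order LinkedHashMap evicts the least recently
// used entry automatically once size exceeds maxEntries.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

Note that LinkedHashMap is not thread-safe, so unlike the ConcurrentHashMap-based demo it would need external synchronization under concurrency.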

Random elimination strategy

A random entry is discarded, whether expired or not. If you have no particular requirements on the cached data, this simple strategy is worth considering.

Non-elimination strategy

When the cache reaches the specified size, nothing is evicted, and no new entries can be added until existing ones are removed (for example, by expiration).

Those are the three areas to consider when implementing a local cache. Now let’s build one.

Implementing local caching

In this demo, we use ConcurrentHashMap as the storage collection so that the cache stays safe even under high concurrency. For expiration I use only the periodic deletion strategy here, not periodic deletion plus lazy deletion; you can try combining both on your own. For eviction, I adopt the least frequently used policy. With the technical choices settled, let’s look at the code.

Cache object class

public class Cache implements Comparable<Cache> {
    // key
    private Object key;
    // cached value
    private Object value;
    // last access time
    private long accessTime;
    // creation time
    private long writeTime;
    // time to live
    private long expireTime;
    // number of hits
    private Integer hitCount;

    // getters/setters omitted...
}

Add the cache

/**
 * Add a cache entry
 *
 * @param key
 * @param value
 * @param expire time to live, in seconds
 */
public void put(K key, V value, long expire) {
    checkNotNull(key);
    checkNotNull(value);
    // If the key already exists, update the entry in place
    if (concurrentHashMap.containsKey(key)) {
        Cache cache = concurrentHashMap.get(key);
        cache.setHitCount(cache.getHitCount() + 1);
        cache.setWriteTime(System.currentTimeMillis());
        cache.setAccessTime(System.currentTimeMillis());
        cache.setExpireTime(expire);
        cache.setValue(value);
        return;
    }
    // The cache has reached its maximum size
    if (isFull()) {
        Object kickedKey = getKickedKey();
        if (kickedKey != null) {
            // Evict the least frequently used entry
            concurrentHashMap.remove(kickedKey);
        } else {
            return;
        }
    }
    Cache cache = new Cache();
    cache.setKey(key);
    cache.setValue(value);
    cache.setWriteTime(System.currentTimeMillis());
    cache.setAccessTime(System.currentTimeMillis());
    cache.setHitCount(1);
    cache.setExpireTime(expire);
    concurrentHashMap.put(key, cache);
}

Getting a cache entry

/**
 * Get a cache entry
 *
 * @param key
 * @return the cached value, or null if absent
 */
public Object get(K key) {
    checkNotNull(key);
    if (concurrentHashMap.isEmpty()) return null;
    if (!concurrentHashMap.containsKey(key)) return null;
    Cache cache = concurrentHashMap.get(key);
    if (cache == null) return null;
    cache.setHitCount(cache.getHitCount() + 1);
    cache.setAccessTime(System.currentTimeMillis());
    return cache.getValue();
}

Get the least used cache

    /**
     * Get the least frequently used cache entry
     *
     * @return the key of the entry to evict
     */
    private Object getKickedKey() {
        Cache min = Collections.min(concurrentHashMap.values());
        return min.getKey();
    }
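Collections.min here depends on Cache’s compareTo, which the elided getter/setter section does not show. For a least-frequently-used policy it could simply order entries by hit count; here is a sketch with MiniCache as an illustrative stand-in for the demo’s Cache class:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Ordering sketch: compare entries by hit count so Collections.min
// returns the least-frequently-used one.
class MiniCache implements Comparable<MiniCache> {
    final String key;
    final int hitCount;

    MiniCache(String key, int hitCount) {
        this.key = key;
        this.hitCount = hitCount;
    }

    @Override
    public int compareTo(MiniCache other) {
        return Integer.compare(this.hitCount, other.hitCount);
    }
}
```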

Expired cache detection method

/**
 * Handle expired cache entries
 */
class TimeoutTimerThread implements Runnable {
    public void run() {
        while (true) {
            try {
                TimeUnit.SECONDS.sleep(60);
                expireCache();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Remove entries whose time to live has elapsed
     *
     * @throws Exception
     */
    private void expireCache() throws Exception {
        System.out.println("Checking for expired cache entries");
        for (Object key : concurrentHashMap.keySet()) {
            Cache cache = concurrentHashMap.get(key);
            // writeTime is recorded with System.currentTimeMillis(), so the
            // elapsed time must be measured on the same clock
            long liveTime = TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis()
                    - cache.getWriteTime());
            if (cache.getExpireTime() > liveTime) {
                continue;
            }
            System.out.println("Clearing expired cache entry: " + key);
            // Clear the expired entry
            concurrentHashMap.remove(key);
        }
    }
}

That is the main code; I have uploaded the complete code to GitHub.

In this article we approached local caching from a design point of view and briefly discussed what an implementation needs to pay attention to. These are in fact the core techniques of caching in general: whether it is Redis, Guava Cache, Ehcache, or another caching tool, the underlying principles are similar to those of our local cache. Once you understand how a local cache is implemented, I believe learning those caching tools becomes much easier.

Where this article falls short, I hope you will point it out, so we can learn and improve together.

Finally

To end with a small advertisement: you are welcome to scan the QR code and follow the WeChat public account “The technical blog of the flathead brother”, and let’s make progress together.