I recently came across a written coding test, with a question roughly as follows:

Implement a cacheRequest method that guarantees that when multiple Ajax requests target the same resource, only one request is actually made at the network layer (assume an existing request method that encapsulates the Ajax call).

At first glance, you could set the HTTP headers Cache-Control and Expires to large values and let the browser cache take over. However, the question states that a built-in request method is provided and that Ajax should be initiated only once, so the intent is presumably to have the candidate solve the caching problem in code, at the business layer.
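For reference, the header approach would look something like the sketch below, assuming an Express server; the route and max-age value are illustrative, not from the original question.

// A minimal sketch of the HTTP-header approach, assuming an Express server.
// The route and max-age value are illustrative only.
const express = require('express')
const app = express()

app.get('/api/data', (req, res) => {
  // Tell the browser it may reuse this response for one hour without re-requesting
  res.set('Cache-Control', 'public, max-age=3600')
  res.set('Expires', new Date(Date.now() + 3600 * 1000).toUTCString())
  res.json({ data: 'hello' })
})

app.listen(3000)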

Next, let's set aside the question's practical value and think about how to implement it.

A first pass at the problem quickly suggests the following ideas:

  • Use a closure or module scope to hold a Map that stores the cached data.
  • On each request, check the cache: if cached data exists, return it; otherwise initiate a real request.
  • After a request succeeds, save the response to the cache and return it. If the request fails, cache nothing.

Then we typically write the following code:

// Build a Map to cache data
const dict = new Map()

// We simply use the URL as the cacheKey
const cacheRequest = (url) => {
  if (dict.has(url)) {
    // Cache hit: return the cached data as a resolved Promise
    return Promise.resolve(dict.get(url))
  }
  // No cache: initiate a real request and write to the cache on success
  return request(url).then(res => {
    dict.set(url, res)
    return res
  })
}
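As a quick usage sketch (the endpoint here is illustrative), a second call for the same URL is served from the cache:

// Illustrative usage; '/api/user' is a made-up endpoint
cacheRequest('/api/user').then(res => {
  // First call: a real request is made and the response gets cached
  console.log(res)
  return cacheRequest('/api/user')
}).then(res => {
  // Second call: resolved straight from the cache, no network traffic
  console.log(res)
})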

Having written this far, did you think the article would end that easily? ~~

Of course not. There is still one more case to consider:

There is a low-probability edge case: when two or more requests for the same resource are issued concurrently and the first is still pending, the subsequent requests will in fact still be initiated.

Therefore, we redesign the logic as follows:

  1. When the first request is in the pending state, set a status value as a lock. Subsequent concurrent cacheRequest calls see the pending status, do not initiate a request, and are instead each wrapped as an asynchronous operation and pushed into a queue.
  2. When the request responds, take the resolve callbacks out of the queue and broadcast the response data to each queued operation.
  3. When the request errors, do the same: broadcast the error message to each queued operation.
  4. The later cacheRequest calls then receive the SUCCESS response data as usual.

This way, among concurrent requests for the same resource, only the first actually initiates an Ajax request; the others simply wait for its result.

First, we define the schema of a cache entry, called cacheInfo, which is what we store in the Map:

{
  status: 'PENDING',  // One of ['PENDING', 'SUCCESS', 'FAIL']
  response: {},       // Response data
  resolves: [],       // Queue of success callbacks from concurrent calls
  rejects: []         // Queue of failure callbacks from concurrent calls
}

For the body of the main function, let's lay out the trunk logic first:

  • Add an extra option parameter, through which a custom cacheKey can be passed.
  • The real-request logic is wrapped separately in handleRequest; since it is used in more than one place, we will implement it on its own.
const dict = new Map()

const cacheRequest = function (target, option = {}) {
  const cacheKey = option.cacheKey || target

  const cacheInfo = dict.get(cacheKey)
  // No cache entry: make a real request and return it
  if (!cacheInfo) {
    return handleRequest(target, cacheKey)
  }

  const status = cacheInfo.status
  // Success data has already been cached
  if (status === 'SUCCESS') {
    return Promise.resolve(cacheInfo.response)
  }
  // The entry is PENDING: wrap a single asynchronous operation and queue it
  if (status === 'PENDING') {
    return new Promise((resolve, reject) => {
      cacheInfo.resolves.push(resolve)
      cacheInfo.rejects.push(reject)
    })
  }
  // The cached request failed: re-initiate the real request
  return handleRequest(target, cacheKey)
}
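For example (with illustrative URLs), a custom cacheKey lets requests whose URLs differ only in volatile parts, such as a timestamp parameter, share one cache entry:

// Both calls share a single cache entry despite differing query strings
cacheRequest('/api/user?ts=1', { cacheKey: '/api/user' })
cacheRequest('/api/user?ts=2', { cacheKey: '/api/user' })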

Next, there is handleRequest, which makes the actual request; it encapsulates overwriting the status and writing to cacheInfo. Two common helpers are extracted out of it: setCache, for writing to the cache, and notify, for broadcasting to the queued asynchronous operations.

The first is setCache, whose logic is very simple: shallow-merge the new fields into the existing cacheInfo and write the result back:

// ... dict = new Map()

const setCache = (cacheKey, info) => {
  dict.set(cacheKey, {
    ...(dict.get(cacheKey) || {}),
    ...info
  })
}
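The shallow merge means fields omitted from info survive a partial update, for example:

// Illustrative: the second, partial update keeps the untouched queue fields
setCache('/api/user', { status: 'PENDING', resolves: [], rejects: [] })
setCache('/api/user', { status: 'SUCCESS', response: { ok: true } })
dict.get('/api/user')
// => { status: 'SUCCESS', response: { ok: true }, resolves: [], rejects: [] }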

Next comes handleRequest: it writes the PENDING status as a lock, initiates the real request, and broadcasts the response on both success and failure:

const handleRequest = (url, cacheKey) => {
  // Write the PENDING lock and reset the queues before requesting
  setCache(cacheKey, {
    status: 'PENDING',
    resolves: [],
    rejects: []
  })

  const ret = request(url)

  return ret.then(res => {
    // On success: flush the cache and broadcast to the concurrent queue
    setCache(cacheKey, {
      status: 'SUCCESS',
      response: res
    })
    notify(cacheKey, res)
    return res
  }).catch(err => {
    // On failure: refresh the cache and broadcast the error message
    setCache(cacheKey, { status: 'FAIL' })
    notify(cacheKey, err)
    return Promise.reject(err)
  })
}

Finally, the implementation of the notify broadcast function: it takes the appropriate queue, invokes each queued callback in turn, and then empties the queues:

// ... dict = new Map()

const notify = (cacheKey, value) => {
  const info = dict.get(cacheKey)

  let queue = []

  if (info.status === 'SUCCESS') {
    queue = info.resolves
  } else if (info.status === 'FAIL') {
    queue = info.rejects
  }

  // Drain the queue, invoking each queued callback with the broadcast value
  while (queue.length) {
    const cb = queue.shift()
    cb(value)
  }

  // Clear both queues after broadcasting
  setCache(cacheKey, { resolves: [], rejects: [] })
}

Next comes the intense and exciting test section (the test code was originally in screenshot form):

  • The server is a simple Express setup, with an artificial two-second delay built in to test concurrent calls against the interface.
  • The client uses axios for its requests, constructing both concurrent requests and separate individual requests.

Server code:
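A minimal sketch along the lines described above, assuming Express; the port, route, and payload are illustrative:

const express = require('express')
const app = express()

app.get('/api/test', (req, res) => {
  console.log('real request received')  // logs once per request that reaches the server
  // Respond after a two-second delay, leaving a window for concurrent calls
  setTimeout(() => {
    res.json({ data: 'hello', ts: Date.now() })
  }, 2000)
})

app.listen(3000)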

Client code:
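Likewise, a sketch of the client side, with the assumed request method backed by axios; the URL is illustrative:

const axios = require('axios')

// The request method that cacheRequest assumes, backed by axios here
const request = (url) => axios.get(url).then(res => res.data)

const url = 'http://localhost:3000/api/test'

// Three concurrent calls, then one more after the response has been cached
Promise.all([cacheRequest(url), cacheRequest(url), cacheRequest(url)])
  .then(results => {
    console.log(results)         // all three resolve with the same response
    return cacheRequest(url)
  })
  .then(res => console.log(res)) // served straight from the cache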

Effect preview: with the setup above, the concurrent calls should all resolve with the same response while the server logs only a single real request, and a later call is served directly from the cache.

Expansion and Summary

  1. The test and function source code has been placed in a personal GitHub repository.
  2. Some readers may feel this is over-engineering, but implementing a proper library means considering many such scenarios.
  3. Further extensions could include cache expiration (expire), request customization, and so on; a minimal sketch of expiration follows below.
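As one possible extension, here is a sketch of expiration under two assumptions: a hypothetical maxAge option in milliseconds, and that handleRequest additionally writes timestamp: Date.now() into the cache on success.

// Hypothetical maxAge option: stale SUCCESS entries count as cache misses.
// Assumes handleRequest also stores timestamp: Date.now() on success.
const cacheRequestWithExpiry = (target, option = {}) => {
  const cacheKey = option.cacheKey || target
  const cacheInfo = dict.get(cacheKey)

  if (cacheInfo && cacheInfo.status === 'SUCCESS' && option.maxAge &&
      Date.now() - cacheInfo.timestamp > option.maxAge) {
    // Entry is stale: drop it so cacheRequest issues a fresh request
    dict.delete(cacheKey)
  }
  return cacheRequest(target, option)
}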