Preface

Talking about Volley in 2018, it can fairly be said that it is too young and too simple; for networking, OkHttp is now the most widely used library. I first came across Volley around my sophomore year and later read through its source code, but in a recent interview I was asked how Volley schedules request priorities and got stuck. Back then my reading of the source stopped at the overall implementation flow and project structure; I did not really understand its specific features or how they were implemented. Compared with a feature's overall flow, the implementation details cannot be ignored; they are arguably the essence of a library. A networking library also has its defects, and by understanding both the strengths and the weaknesses we can play to the former, avoid the latter, and make full use of the library. With this principle in mind, I plan to revisit the code I read before, tentatively tearing down one "wheel" (library) per week.

The code will be analyzed along three main lines: its implementation flow, how its features are implemented, and its defects.

Volley basic use

final TextView mTextView = (TextView) findViewById(R.id.text);

// Instantiate the RequestQueue.
RequestQueue queue = Volley.newRequestQueue(this);
String url = "http://www.google.com";

// Request a string response from the provided URL.
StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Display the first 500 characters of the response string.
                mTextView.setText("Response is: " + response.substring(0, 500));
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                mTextView.setText("That didn't work!");
            }
        });

// Add the request to the RequestQueue.
queue.add(stringRequest);
  • Create a RequestQueue.
  • Create a Request. Volley provides String, JsonObject, and other request types out of the box; users can extend Request to implement their own return types (a minimal sketch follows the summary below).
  • Add the request to the request queue.

After these three steps, we have completed a network request. We get the result of a successful request in the onResponse method of the registered listener, and the error message in the onErrorResponse method.
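For the second point, here is a minimal sketch of a custom Request subclass that parses a JSON body into an object with Gson. GsonRequest is a hypothetical name and the Gson dependency is an assumption made for illustration; this class is not shipped with Volley (imports are omitted, as in the other snippets in this article).

// A minimal sketch, not part of Volley: assumes the Gson library is on the classpath.
public class GsonRequest<T> extends Request<T> {
    private final Gson gson = new Gson();
    private final Class<T> clazz;
    private final Response.Listener<T> listener;

    public GsonRequest(String url, Class<T> clazz,
                       Response.Listener<T> listener, Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        this.clazz = clazz;
        this.listener = listener;
    }

    @Override
    protected Response<T> parseNetworkResponse(NetworkResponse response) {
        try {
            // Decode the body with the charset from the response headers, then parse the JSON.
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(gson.fromJson(json, clazz),
                    HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException | JsonSyntaxException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    protected void deliverResponse(T response) {
        listener.onResponse(response);
    }
}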

Volley implementation

  • Creation of the request queue
private static RequestQueue newRequestQueue(Context context, Network network) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}
  • The request is added to the queue
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and track it as a current request.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request does not need caching, add it to the network queue directly.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Otherwise add it to the cache queue.
    mCacheQueue.add(request);
    return request;
}
  • Start the request queue
public void start() {
    // Make sure any currently running dispatchers are stopped first.
    stop();

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (one thread each) up to the pool size and start them.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

This first stops any dispatchers that are already running, then re-creates the cache dispatcher and the network dispatchers and starts them.

  • Create and start the CacheDispatcher
public void run() {
    mCache.initialize();
    while (true) {
        // Take a request from the cache queue; blocks until one is available.
        final Request<?> request = mCacheQueue.take();

        // If the request has been canceled, don't bother dispatching it.
        if (request.isCanceled()) {
            request.finish("cache-discard-canceled");
            continue;
        }

        // Attempt to retrieve this item from the cache.
        Cache.Entry entry = mCache.get(request.getCacheKey());
        if (entry == null) {
            request.addMarker("cache-miss");
            // Cache miss; send off to the network dispatcher unless an identical
            // request is already in flight.
            if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
                mNetworkQueue.put(request);
            }
            continue;
        }

        // The cached entry has expired; the request goes to the network queue
        // unless an identical request is already in flight.
        if (entry.isExpired()) {
            request.addMarker("cache-hit-expired");
            request.setCacheEntry(entry);
            if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
                mNetworkQueue.put(request);
            }
            continue;
        }

        // The cache entry is valid: a cache hit. Wrap it as a NetworkResponse and parse it.
        Response<?> response = request.parseNetworkResponse(
                new NetworkResponse(entry.data, entry.responseHeaders));
        request.addMarker("cache-hit-parsed");

        if (!entry.refreshNeeded()) {
            // The cache does not need refreshing; just deliver the result.
            mDelivery.postResponse(request, response);
        } else {
            // Soft-expired cache hit: deliver the cached result, then also send the
            // request to the network for a refresh, unless one is already waiting.
            request.addMarker("cache-hit-refresh-needed");
            request.setCacheEntry(entry);
            response.intermediate = true;

            if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Restore the interrupted status.
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            } else {
                mDelivery.postResponse(request, response);
            }
        }
    }
}

When the cache misses or the entry has expired, a network request is needed. This is managed by the WaitingRequestManager, which maintains a Map whose key is the request's cacheKey and whose value is the list of requests waiting on that key. If the Map does not yet contain the cacheKey, the key is added with a null value and the request proceeds to the network queue; if it does, the request is simply appended to the waiting list. The initial null value marks that a request for this key is already in flight, so duplicates are not handed to the NetworkDispatcher again. This guarantees that the same request is executed by the HttpStack only once, while the success/error callbacks registered on each individual request are still invoked.

private synchronized boolean maybeAddToWaitingRequests(Request<?> request) {
    String cacheKey = request.getCacheKey();
    // A request with this cacheKey is already in flight; queue this one up behind it.
    if (mWaitingRequests.containsKey(cacheKey)) {
        // There is already a request in flight. Queue up.
        List<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
        if (stagedRequests == null) {
            stagedRequests = new ArrayList<>();
        }
        request.addMarker("waiting-for-response");
        stagedRequests.add(request);
        mWaitingRequests.put(cacheKey, stagedRequests);
        if (VolleyLog.DEBUG) {
            VolleyLog.d("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
        }
        return true;
    } else {
        // Insert a null value so we know a request for this key is in flight;
        // the completion callback is invoked from the NetworkDispatcher.
        mWaitingRequests.put(cacheKey, null);
        request.setNetworkRequestCompleteListener(this);
        if (VolleyLog.DEBUG) {
            VolleyLog.d("new request, sending to network %s", cacheKey);
        }
        return false;
    }
}

By setting a NetworkRequestCompleteListener on the request, its onResponseReceived method is called back when the network request completes.

public void onResponseReceived(Request<?> request, Response<?> response) {
    // Without a usable (non-expired) cache entry, fall back to the no-usable-response path.
    if (response.cacheEntry == null || response.cacheEntry.isExpired()) {
        onNoUsableResponseReceived(request);
        return;
    }

    String cacheKey = request.getCacheKey();
    List<Request<?>> waitingRequests;
    synchronized (this) {
        waitingRequests = mWaitingRequests.remove(cacheKey);
    }

    if (waitingRequests != null) {
        // Deliver the response to every request that was waiting on this cacheKey.
        for (Request<?> waiting : waitingRequests) {
            mCacheDispatcher.mDelivery.postResponse(waiting, response);
        }
    }
}

At this point, the callback for the queued request will be executed.

  • Create and start the NetworkDispatcher
public void run() {
    while (true) {
        // Take a request from the network queue; blocks until one is available.
        Request<?> request = mQueue.take();

        // If the request was cancelled already, do not perform the network request.
        if (request.isCanceled()) {
            request.finish("network-discard-cancelled");
            request.notifyListenerResponseNotUsable();
            continue;
        }

        // Perform the actual network request and parse the raw response.
        NetworkResponse networkResponse = mNetwork.performRequest(request);
        Response<?> response = request.parseNetworkResponse(networkResponse);

        // Write the parsed response to cache if this request should be cached.
        if (request.shouldCache() && response.cacheEntry != null) {
            mCache.put(request.getCacheKey(), response.cacheEntry);
            request.addMarker("network-cache-written");
        }

        // Post the response back through the delivery and notify any waiting requests.
        mDelivery.postResponse(request, response);
        request.notifyListenerResponseReceived(response);
    }
}
  • The request is processed until a response is generated

Volley features and defects

Features

  • Automatic scheduling of network requests, supporting multiple concurrent network requests
  • Transparent disk and memory caching of response results
  • Request priority adjustment is supported
  • Cancelling requests is supported, with APIs provided
  • Highly extensible
  • Asynchronous callback of network request results

Defects

  • Not suitable for large network download requests

Next, let's analyze each of these features and defects from the source code.

  1. Automatic scheduling of network requests, supporting multiple concurrent network requests

In RequestQueue's start() method, multiple NetworkDispatchers are created, and each dispatcher is a thread.

NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
        mCache, mDelivery);
mDispatchers[i] = networkDispatcher;
networkDispatcher.start();

Network requests are stored in a priority blocking queue; each dispatcher thread takes a request from it, executes it, and delivers the response. Volley manages these threads internally, so the developer does not need to care about them.
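The number of network dispatcher threads is the thread pool size passed to the RequestQueue constructor (Volley's default is 4). A minimal sketch of building a queue with a larger pool, assuming the stock DiskBasedCache, BasicNetwork, and HurlStack classes; the directory name and pool size here are arbitrary:

File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
// The third argument is the number of NetworkDispatcher threads to create.
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 8);
queue.start();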

  2. Transparent disk and memory caching of response results

For caching, Volley lets users define their own cache rules by implementing the Cache interface, and also provides a default implementation, DiskBasedCache. How does it achieve a transparent disk and memory cache of responses?

First, every request that is judged cacheable is added to the cache queue, and the cache is then consulted: if a valid entry exists, it is returned; if there is no entry or it has expired, the request is moved to the network request queue.

In CacheDispatcher, we first call the Cache initialization method

mCache.initialize();

The cached entry is then retrieved via the Cache's get method, using the request's cacheKey. Internally, DiskBasedCache maintains a LinkedHashMap in memory, keyed by the request's cacheKey with a CacheHeader as the value, while the body of each cached response is stored in its own file on disk. At initialization, the cache directory is scanned and the headers are loaded into memory; the actual data is then read from disk based on this in-memory index.
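Because the cache is just a constructor argument of RequestQueue, its size or implementation can be swapped. A rough sketch that raises the disk cache limit from Volley's 5 MB default to 10 MB; the size is only an example, and any class implementing the Cache interface could be passed instead:

File cacheDir = new File(context.getCacheDir(), "volley");
// DiskBasedCache(File, int) lets us set the maximum cache size in bytes.
Cache cache = new DiskBasedCache(cacheDir, 10 * 1024 * 1024);
RequestQueue queue = new RequestQueue(cache, new BasicNetwork(new HurlStack()));
queue.start();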

  3. Priority implementation

Volley supports priority adjustment. Requests are added to a PriorityBlockingQueue, which orders them by calling the compareTo method of the Request objects it stores.

@Override
public int compareTo(Request<T> other) {
    Priority left = this.getPriority();
    Priority right = other.getPriority();

    // High-priority requests are "lesser" so they are sorted to the front.
    // Equal priorities are sorted by sequence number to provide FIFO ordering.
    return left == right ?
            this.mSequence - other.mSequence :
            right.ordinal() - left.ordinal();
}

Ordering is determined first by the request's priority and then by its sequence number, which increases monotonically in the order requests are added, giving FIFO behavior within the same priority.
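Request.getPriority() returns Priority.NORMAL by default, and stock Volley exposes no setter, so the usual way to change a request's priority is to override the getter in a subclass. A minimal sketch; the mPriority field and setPriority method are our own additions, not part of Volley:

public class PriorityStringRequest extends StringRequest {
    private Priority mPriority = Priority.NORMAL;

    public PriorityStringRequest(int method, String url,
                                 Response.Listener<String> listener,
                                 Response.ErrorListener errorListener) {
        super(method, url, listener, errorListener);
    }

    public void setPriority(Priority priority) {
        mPriority = priority;
    }

    @Override
    public Priority getPriority() {
        // Consulted by compareTo(): higher-priority requests are taken from the queue first.
        return mPriority;
    }
}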

  4. Cancelling network requests at any time

This works by setting a cancel flag on the request; before the request is executed, the dispatcher checks whether it has been canceled and, if so, finishes it without performing the network call.
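In practice, cancellation is usually done by tagging requests and cancelling by tag, for example in an Activity's onStop. A short sketch; the tag value is arbitrary:

// Tag requests so they can be cancelled in bulk later.
stringRequest.setTag("MyActivityRequests");
queue.add(stringRequest);

// Cancel every queued or in-flight request carrying this tag.
queue.cancelAll("MyActivityRequests");

// A single request can also be cancelled directly.
stringRequest.cancel();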

  5. Highly extensible

Volley allows the Cache, the HttpStack (the class that actually executes requests, with HttpClient- and HttpURLConnection-based implementations provided), the response delivery, and the response parsing (via Request subclasses) to be swapped out. Users can extend each of these pieces according to their own needs.
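As an example of this extensibility, older Volley versions let you pass a custom HttpStack when building the queue via newRequestQueue(Context, HttpStack); an OkHttp-backed stack could be supplied the same way. A sketch using the built-in HurlStack (HttpURLConnection-based):

// Any HttpStack implementation can be substituted here.
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());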

  6. Asynchronous callback of network request results
  • Create the delivery (ExecutorDelivery)
new ExecutorDelivery(new Handler(Looper.getMainLooper()));
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}
  • Deliver the data
@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

The ExecutorDelivery holds a Handler bound to the main thread's Looper and posts our response to the main thread via Handler.post. This is how the asynchronous callback ends up on the main thread.
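Since the delivery itself is just another constructor argument of RequestQueue, responses could in principle be delivered on a different thread by handing ExecutorDelivery a Handler bound to another Looper. A sketch under that assumption; the background HandlerThread here is purely illustrative:

// Deliver responses on a background HandlerThread instead of the main thread.
HandlerThread deliveryThread = new HandlerThread("volley-delivery");
deliveryThread.start();
ResponseDelivery delivery = new ExecutorDelivery(new Handler(deliveryThread.getLooper()));

RequestQueue queue = new RequestQueue(
        new DiskBasedCache(new File(context.getCacheDir(), "volley")),
        new BasicNetwork(new HurlStack()),
        4,          // network dispatcher thread pool size
        delivery);
queue.start();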

  7. Not suitable for large data transfers

The response body is returned as a byte[] held entirely in memory, and is then converted to a String using the corresponding charset. As the number of requests grows and response bodies get larger, the amount of data held in memory grows accordingly, so Volley is not suitable for transferring large amounts of data; it is designed for small, frequent requests.

new NetworkResponse(statusCode, responseContents, responseHeaders, false,
                        SystemClock.elapsedRealtime() - requestStart);

Here responseContents is a byte array.

Conclusion

Volley's source code is fairly simple and not very large, so it is easy to read and understand, which is why I chose it as the first wheel to tear down in the new year. Tearing a wheel apart helps us learn how its features are implemented and also lets us absorb some of its design ideas. The plan is to dismantle one wheel per week, Android-related or otherwise; with more time I may pick a larger codebase, with less time a simpler one. The approach for each is the same: basic usage, a walkthrough of the implementation principles, and an analysis of features and defects. Through these three aspects, a wheel can be thoroughly understood.