2.1 Volley Framework Structure
As shown by Volley’s framework structure in the previous chapter, every Request is handled according to the same process, so I will analyze the source code by following the path of a Request. The framework is divided into three parts: (1) the main thread, which creates the Request and parses and displays its result; (2) the cache thread, which processes requests against the cache: if the requested content already exists in the cache, it is fetched from the cache and returned; (3) the network threads: when a request cannot be satisfied from the cache, the data is fetched from the network. There is exactly one main thread and one cache thread, but there can be several network threads (4 by default) to handle requests in parallel.
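The thread layout described above can be sketched in plain Java: one shared blocking queue feeds a fixed pool of worker threads, the way Volley's network queue feeds its (by default four) NetworkDispatchers. This is a simplified model for illustration, not Volley code; all names in it are invented.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of Volley's thread layout: one shared blocking queue feeding a
// fixed pool of worker threads, like mNetworkQueue and its NetworkDispatchers.
public class DispatcherSketch {
    static final String QUIT = "QUIT"; // poison pill used to stop a worker

    // Push n requests through a pool of `workers` threads; returns how many were handled.
    public static int process(int n, int workers) throws InterruptedException {
        BlockingQueue<String> networkQueue = new LinkedBlockingQueue<>();
        AtomicInteger handled = new AtomicInteger();

        Thread[] dispatchers = new Thread[workers];
        for (int i = 0; i < workers; i++) {
            dispatchers[i] = new Thread(() -> {
                try {
                    while (true) {
                        String request = networkQueue.take(); // blocks until work arrives
                        if (QUIT.equals(request)) return;     // time to quit
                        handled.incrementAndGet();            // "perform" the request
                    }
                } catch (InterruptedException ignored) { }
            });
            dispatchers[i].start();
        }

        for (int i = 0; i < n; i++) networkQueue.put("request-" + i);
        for (int i = 0; i < workers; i++) networkQueue.put(QUIT); // one pill per worker
        for (Thread t : dispatchers) t.join();
        return handled.get();
    }
}
```

Because the queue is shared, the workers divide the requests among themselves automatically; this is the parallelism that the four network threads provide.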
2.2 RequestQueue of Volley
RequestQueue is the first object to be created in the Volley framework. It is created by calling a static method of the Volley class:
```java
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    ...
    // 1. Select an HttpStack based on the current system version: if the version is
    //    greater than or equal to 9 (Android 2.3), use HurlStack, backed by
    //    HttpURLConnection; if less than 9, use HttpClientStack, backed by HttpClient.
    // 2. Create a Network, passing the stack to its constructor, to perform the
    //    actual communication with the network.
    // 3. Create a DiskBasedCache object and pass it to the RequestQueue as an argument.
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network); // create the RequestQueue
    queue.start(); // start the RequestQueue's dispatcher threads
    return queue;
}
```
The RequestQueue serves as the holder of all in-flight requests in Volley; internally it stores them in a Set:

```java
private final Set<Request<?>> mCurrentRequests = new HashSet<>();
```

Volley's static method creates the RequestQueue object by calling its constructor:
```java
public RequestQueue(Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
    mCache = cache;       // cache
    mNetwork = network;   // network
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery; // response delivery
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE); // DEFAULT_NETWORK_THREAD_POOL_SIZE = 4
}
```
Its main work is initializing the disk cache path, the Network interface that executes network requests, the network request dispatchers, and the ResponseDelivery that delivers request results. In the creation process above, both CacheDispatcher and NetworkDispatcher inherit from Thread.
```java
public class NetworkDispatcher extends Thread {
    ... // omitted
}

public class CacheDispatcher extends Thread {
    ... // omitted
}
```
ExecutorDelivery implements the ResponseDelivery interface. Its constructor takes a Handler, and that Handler is constructed with Looper.getMainLooper(), i.e., the application's main-thread Looper. In other words, the handler belongs to the main thread, and its job is to deliver the result of the request (success or error) to the main thread.
```java
/**
 * Delivers responses and errors.
 */
public class ExecutorDelivery implements ResponseDelivery {
    /** Used for posting responses, typically to the main thread. */
    private final Executor mResponsePoster;

    /**
     * Creates a new response delivery interface.
     * @param handler {@link Handler} to post responses on
     */
    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }
    ... // omitted
}
```
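Stripped of the Android Handler, the pattern ExecutorDelivery uses can be modeled in plain Java: an Executor whose execute simply forwards every command to one designated "main" thread, just as handler.post does. This is an illustrative sketch; DeliverySketch and its member names are invented.

```java
import java.util.concurrent.*;

// Plain-Java model of ExecutorDelivery: an Executor that forwards every command to a
// designated "main" thread, the way handler.post(command) does on Android.
public class DeliverySketch {
    // Single daemon thread standing in for the Android main thread.
    static final ExecutorService mainThread = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "fake-main");
        t.setDaemon(true); // don't keep the JVM alive
        return t;
    });

    // The equivalent of ExecutorDelivery's mResponsePoster.
    static final Executor responsePoster = command -> mainThread.execute(command);

    // The caller (standing in for a worker thread) posts a delivery and we record
    // which thread actually ran it.
    public static String deliverAndGetThreadName() throws Exception {
        CompletableFuture<String> name = new CompletableFuture<>();
        responsePoster.execute(() -> name.complete(Thread.currentThread().getName()));
        return name.get(2, TimeUnit.SECONDS);
    }
}
```

Whichever thread calls responsePoster.execute, the command always runs on the "fake-main" thread, which is exactly why delivery results can safely touch the UI.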
After the RequestQueue object is created, the start method is called to start all dispatchers (CacheDispatcher and NetworkDispatcher):
```java
public void start() {
    stop(); // make sure any currently running dispatchers are stopped
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
```
The stop method quits the cache dispatcher and all network dispatchers:
```java
public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}
```
In the start process, mCacheQueue and mNetworkQueue are defined as follows:
```java
private final PriorityBlockingQueue<Request<?>> mCacheQueue = new PriorityBlockingQueue<>();

/** The queue of requests that are actually going out to the network. */
private final PriorityBlockingQueue<Request<?>> mNetworkQueue = new PriorityBlockingQueue<>();
```
As you can see, both are PriorityBlockingQueues, the priority-ordered blocking queues provided in the java.util.concurrent package. They hold the incoming requests, such as JsonRequest, ImageRequest, and StringRequest. Having seen how the queue is started and stopped, how is a Request added to the queue once it is created? RequestQueue provides an add method that adds the created Request to the queue and routes it according to whether the request may be served from the cache.
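The property Volley relies on here is that take() on a PriorityBlockingQueue always returns the smallest element according to compareTo, so high-priority requests jump the line regardless of insertion order. The following runnable sketch shows this with an invented FakeRequest type whose ordering (priority first, then sequence for FIFO within a priority) is a simplified version of what Request.compareTo does:

```java
import java.util.concurrent.PriorityBlockingQueue;

// Demonstrates priority ordering on a PriorityBlockingQueue with an invented
// FakeRequest type; a simplified stand-in for Volley's Request ordering.
public class PrioritySketch {
    static class FakeRequest implements Comparable<FakeRequest> {
        final int priority; // smaller = more urgent, for this sketch
        final int sequence; // insertion order

        FakeRequest(int priority, int sequence) {
            this.priority = priority;
            this.sequence = sequence;
        }

        @Override
        public int compareTo(FakeRequest other) {
            // Order by priority, then by sequence (FIFO within equal priority).
            return priority != other.priority
                    ? Integer.compare(priority, other.priority)
                    : Integer.compare(sequence, other.sequence);
        }
    }

    // Returns the sequence numbers in the order take() hands them out.
    public static int[] drainOrder() throws InterruptedException {
        PriorityBlockingQueue<FakeRequest> queue = new PriorityBlockingQueue<>();
        queue.put(new FakeRequest(2, 0)); // low priority, added first
        queue.put(new FakeRequest(0, 1)); // high priority, added second
        queue.put(new FakeRequest(2, 2)); // low priority again
        int[] order = new int[3];
        for (int i = 0; i < 3; i++) order[i] = queue.take().sequence;
        return order;
    }
}
```

The high-priority request is taken first even though it was added second; the two equal-priority requests then come out in insertion order, which is what the sequence number in add() guarantees.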
```java
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to mCurrentRequests,
    // marking it as in flight.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Assign a sequence number so that requests are processed in order.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is not cacheable, add it to mNetworkQueue so that its data
    // is fetched directly from the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Otherwise the request may be served from the cache first.
    synchronized (mWaitingRequests) {
        // The cache key is a string composed of Method + ":" + Url; by default the
        // Url serves as the cacheKey.
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // An identical request (same cacheKey) is already in flight; park this one
            // behind it instead of processing it again.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // No request with this cacheKey is in flight: record the key with a null
            // queue and add the request to mCacheQueue for processing.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
```
When a request is added to mCacheQueue or mNetworkQueue through add, a running dispatcher thread picks it up, processes it, and finally hands the result to mDelivery, which posts it to the main thread for UI updates. After a Request has been processed by the cache thread or a network thread, it calls its finish method:
```java
void finish(final String tag) {
    if (mRequestQueue != null) {
        mRequestQueue.finish(this);
        onFinish();
    }
    ... // omitted
}
```
The next step is to call the RequestQueue's finish() method:
```java
<T> void finish(Request<T> request) {
    // 1. Remove the request from mCurrentRequests and notify any finished listeners.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        for (RequestFinishedListener listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }
    // 2. If the request was cacheable, release any requests waiting on the same
    //    cacheKey back into the cache queue.
    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight,
                // but that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}
```
The second step checks whether the finished request was cacheable. If it was, all requests with the same cacheKey that were parked in mWaitingRequests are released into mCacheQueue: the finished request has just written its response into the cache, so those waiting requests can now be served from the cache instead of going to the network. In other words, while one request for a given cacheKey is in flight, identical requests are held back; once the first one completes and primes the cache, the rest are answered from the cache without any extra network traffic. Finally, RequestQueue provides two methods that let the user cancel requests at any time:
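The mWaitingRequests bookkeeping described in these two steps can be condensed into a small plain-Java model (the class and its names are invented, and requests are reduced to plain strings for illustration):

```java
import java.util.*;

// Simplified model of mWaitingRequests: the first request for a cache key goes to the
// cache queue; identical requests are parked; finishing the first releases the rest.
public class WaitingSketch {
    final Map<String, Queue<String>> waiting = new HashMap<>();
    final List<String> cacheQueue = new ArrayList<>();

    void add(String cacheKey, String requestId) {
        if (waiting.containsKey(cacheKey)) {
            // Same key already in flight: hold this request back.
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) staged = new LinkedList<>();
            staged.add(requestId);
            waiting.put(cacheKey, staged);
        } else {
            // First request for this key: mark it in flight and process it.
            waiting.put(cacheKey, null);
            cacheQueue.add(requestId);
        }
    }

    void finish(String cacheKey) {
        // Release everything parked on this key back into the cache queue.
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged != null) cacheQueue.addAll(staged);
    }
}
```

Adding three requests with the same key processes only the first; once finish is called for that key, the other two enter the cache queue, where the now-primed cache can answer them.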
```java
/** Cancels all requests in this queue for which the given filter applies. */
public void cancelAll(RequestFilter filter) {
    synchronized (mCurrentRequests) {
        for (Request<?> request : mCurrentRequests) {
            if (filter.apply(request)) {
                request.cancel();
            }
        }
    }
}

/** Cancels all requests in this queue with the given tag. */
public void cancelAll(final Object tag) {
    if (tag == null) {
        throw new IllegalArgumentException("Cannot cancelAll with a null tag");
    }
    cancelAll(new RequestFilter() {
        @Override
        public boolean apply(Request<?> request) {
            return request.getTag() == tag;
        }
    });
}
```
The cancelAll(tag) overload delegates to cancelAll(filter) with a filter that matches requests by tag, and the filter version iterates over mCurrentRequests and cancels every match. The core source code of RequestQueue has now been analyzed.
2.3 Request of Volley
The requests provided by Volley include StringRequest, JsonArrayRequest, JsonObjectRequest, ImageRequest, and JsonRequest. JsonArrayRequest and JsonObjectRequest inherit from JsonRequest, while StringRequest, ImageRequest, and JsonRequest inherit from Request. You can also define a custom Request. Request is an abstract class that provides many methods; the two abstract methods a subclass must implement are:
```java
/**
 * Subclasses must implement this to deliver the parsed response to the request's
 * listener. The given response is guaranteed to be non-null; responses that fail
 * to parse are not delivered.
 */
abstract protected void deliverResponse(T response);

/**
 * Subclasses must implement this to parse the raw network response and return an
 * appropriate response type. This method will be called from a worker thread. It
 * may return null in the case of an error.
 */
abstract protected Response<T> parseNetworkResponse(NetworkResponse response);
```
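To make the worker-thread half concrete, here is a plain-Java sketch of what a typical string-parsing parseNetworkResponse boils down to: decode the raw response bytes using the charset announced in the Content-Type header, with HTTP's historical ISO-8859-1 default as a fallback. This mirrors the spirit of StringRequest's parsing but is not Volley code; parseCharset here is a simplified stand-in for Volley's header parsing.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Simplified sketch of the parsing work a string request does on a worker thread.
public class ParseSketch {
    // Simplified stand-in for header-based charset detection.
    static String parseCharset(Map<String, String> headers) {
        String contentType = headers.get("Content-Type");
        if (contentType != null) {
            for (String param : contentType.split(";")) {
                String p = param.trim();
                if (p.startsWith("charset=")) return p.substring("charset=".length());
            }
        }
        return "ISO-8859-1"; // HTTP's historical default charset
    }

    // The worker-thread half of a string request: raw bytes -> parsed value.
    static String parseBody(byte[] data, Map<String, String> headers) {
        try {
            return new String(data, parseCharset(headers));
        } catch (java.io.UnsupportedEncodingException e) {
            return new String(data, StandardCharsets.UTF_8); // fall back on bad charset
        }
    }
}
```

A real subclass would wrap the decoded string in a Response object together with a cache entry; the decoding step above is the essential parsing work.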
The specific usage was described in the previous chapter. Every request, whether provided by Volley or custom, parses its response through its own parsing method.
2.4 HttpStack
What does the network request itself look like? The concrete network requests are implemented in HurlStack and HttpClientStack. Recall that when the request queue is created, Volley checks the current system version in order to hand requests to a different network framework:
```java
if (stack == null) {
    if (Build.VERSION.SDK_INT >= 9) {
        // On API 9 (Android 2.3) and above, use HurlStack, which performs network
        // requests with HttpURLConnection.
        stack = new HurlStack();
    } else {
        // Below API 9, use HttpClientStack, which performs network requests with
        // HttpClient.
        stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
    }
}
Network network = new BasicNetwork(stack);
```
Network is an interface that contains only one method, public NetworkResponse performRequest(Request<?> request) throws VolleyError, and its implementing class is BasicNetwork. The network stack selected according to the system version is passed into BasicNetwork as a constructor parameter.
```java
public BasicNetwork(HttpStack httpStack, ByteArrayPool pool) {
    // ...
}

@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    // ...
    Map<String, String> headers = new HashMap<>();
    addCacheHeaders(headers, request.getCacheEntry());
    httpResponse = mHttpStack.performRequest(request, headers);
    // ...
    // Return the result of the network request.
    return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
            SystemClock.elapsedRealtime() - requestStart);
}
```
NetworkResponse is essentially a bean: it encapsulates the data returned by the network request, the various status codes, and the time the request took.
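A rough model of such a bean (field names abbreviated; this is an illustrative sketch, not the actual NetworkResponse class) looks like this:

```java
import java.util.Collections;
import java.util.Map;

// Rough model of a response bean: immutable fields set once in the constructor,
// no behavior beyond carrying the data of one network response.
public class ResponseBean {
    public final int statusCode;
    public final byte[] data;
    public final Map<String, String> headers;
    public final boolean notModified;  // true when the server answered 304
    public final long networkTimeMs;   // time the request took

    public ResponseBean(int statusCode, byte[] data, Map<String, String> headers,
                        boolean notModified, long networkTimeMs) {
        this.statusCode = statusCode;
        this.data = data;
        this.headers = Collections.unmodifiableMap(headers);
        this.notModified = notModified;
        this.networkTimeMs = networkTimeMs;
    }
}
```

Keeping the fields final means the object can be handed across threads (worker to main) without synchronization, which fits the way responses travel through the framework.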
2.5 NetworkDispatcher
That covers the network request itself; what about the thread that manages network requests? When the CacheDispatcher cannot satisfy a request from the cache, the request is handled by a NetworkDispatcher thread:
```java
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request<?> request;
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        // Release the previous request object to avoid leaking it while mQueue blocks.
        request = null;
        try {
            // Take a request from the queue, blocking until one is available.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled, finish it and move on to the next one.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 (Not Modified) and we already delivered a
            // response for this request, there is nothing more to do.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response data into a Response object on this worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Use the request's shouldCache flag to decide whether to cache, and if
            // so, put the cache entry into mCache.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Hand the Response back to the main thread via mDelivery for UI updates.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            // On unexpected errors, mDelivery posts the error back to the main thread.
            mDelivery.postError(request, volleyError);
        }
    }
}
```
The NetworkDispatcher thread mainly does the following: 1) it calls take() on mQueue to get a request from the queue, blocking until a new request arrives if the queue is empty; 2) it checks whether the request has been cancelled, and if so, goes back to fetch the next one; 3) it calls the Network object to send the request over the network, obtaining a NetworkResponse object; 4) it calls the request's parseNetworkResponse method to turn the NetworkResponse into the corresponding Response object; 5) it checks whether the request should be cached and, if so, puts the cacheEntry from its Response into mCache; 6) it hands the Response to mDelivery to be posted to the main thread for UI updates. The main classes and processes of the Volley framework have now been covered. Many minor details were left out, but with these processes in mind the source code is perfectly readable.