0x1. Introduction

Volley, like AsyncTask, is a relic. It was first announced at Google I/O in 2013 with the intention of making Android developers write less repetitive request code.

Why a relic? In the early days, network requests were made directly with HttpURLConnection or HttpClient, which is quite tedious. For example, here is a snippet that requests Baidu:

private void sendRequest() {
        // Open a thread to initiate a network request
        new Thread(new Runnable() {
            @Override
            public void run() {
                HttpURLConnection conn = null;
                BufferedReader reader = null;
                try {
                    // Get an HttpURLConnection instance
                    URL url = new URL("https://www.baidu.com");
                    conn = (HttpURLConnection) url.openConnection();

                    // Set the request method and other options as needed
                    conn.setRequestMethod("GET");
                    conn.setConnectTimeout(8000);
                    conn.setReadTimeout(8000);

                    // Get the response code and the returned input stream
                    int responseCode = conn.getResponseCode();
                    InputStream in = conn.getInputStream();

                    // Read the input stream
                    reader = new BufferedReader(new InputStreamReader(in));
                    StringBuilder response = new StringBuilder();
                    String line;
                    while ((line = reader.readLine()) != null) {
                        response.append(line);
                    }
                    showResponse(response.toString());
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (reader != null) {
                        try {
                            reader.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                    if (conn != null) {
                        conn.disconnect();
                    }
                }
            }
        }).start();
    }

    // Display the request result
    private void showResponse(final String response) {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                // Perform UI operations
                xxx.setText(response);
            }
        });
    }

Writing this long-winded boilerplate for every request is painful for whoever writes it and for whoever reads it, so it is usually wrapped up.

Wrapping it well is not easy, though, once you consider concurrency, request cancellation, and so on. What Volley does is exactly this kind of encapsulation: it hides the messy communication details internally, letting developers complete network requests with minimal code.
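To make the idea concrete, the boilerplate above could be wrapped once behind a tiny callback interface. This is a hypothetical sketch, not Volley code: `HttpCallback` and `HttpUtil` are invented names, and real wrappers would also handle threading back to the UI, cancellation, and reuse.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical wrapper (not part of Volley): the boilerplate is written once,
// and callers just pass a URL and a callback.
interface HttpCallback {
    void onSuccess(String response);
    void onError(Exception e);
}

class HttpUtil {
    /** Fires a GET on a worker thread; returns the thread so callers may join it. */
    static Thread get(final String address, final HttpCallback callback) {
        Thread worker = new Thread(() -> {
            HttpURLConnection conn = null;
            try {
                conn = (HttpURLConnection) new URL(address).openConnection();
                conn.setRequestMethod("GET");
                conn.setConnectTimeout(8000);
                conn.setReadTimeout(8000);
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()));
                StringBuilder response = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    response.append(line);
                }
                reader.close();
                callback.onSuccess(response.toString());
            } catch (Exception e) {
                callback.onError(e); // errors are reported through the callback too
            } finally {
                if (conn != null) conn.disconnect();
            }
        });
        worker.start();
        return worker;
    }
}
```

The caller's side shrinks to one call plus two callback methods, which is essentially the ergonomics Volley provides.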

The GitHub repository is still maintained by Google, and the official Volley Overview document describes some of its advantages:

To use it, add the dependency to your module's build.gradle:

implementation 'com.android.volley:volley:1.1.1'

Then write a simple request (adapted from the official example ~)

// Create a request queue
RequestQueue mQueue = Volley.newRequestQueue(context);

// Construct the request
StringRequest stringRequest = new StringRequest("http://www.baidu.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("TAG", response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("TAG", error.getMessage(), error);
            }
        });

// Queue the request
mQueue.add(stringRequest);

The code is much more pleasing to the eye than the straightforward and crude way above, and the documentation also provides a flow chart for handling requests in Volley:

Just to give you a sense of what goes on inside Volley: requests also support caching, so on a cache hit the data is read straight from the cache without a round trip to the server.

In real business development OkHttp is everywhere and Volley is largely extinct, so it is not recommended for new projects. However, its design ideas are still worth borrowing: the code is short and complete, and understanding it helps when studying other frameworks later. So this section takes some time to pull it apart.


0x2. Concurrent Design

As in the HttpURLConnection example above, a new thread is created for every request, which is wasteful:

  • Threads are not reused; frequently creating and destroying them causes unnecessary overhead;
  • There is no cap on the number of threads, which can cause excessive resource contention and poor system utilization.

So almost every concurrent open-source project ends up with the same recipe: a thread pool, plus a task queue whose enqueue and dequeue operations are locked. Sometimes, for decoupling, a separate dispatcher loops forever over the task queue, pulling tasks out and handing them to the pool to execute.

Volley is designed with concurrency in mind. There is no thread-pool initialization code; instead there are two Thread subclasses:

NetworkDispatcher extends Thread, and by default an array of four of them is created, along with a single CacheDispatcher. They are all started in RequestQueue → start():

Volley → newRequestQueue() calls start():

So these threads are created and started as soon as the RequestQueue is instantiated.
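The structure can be modeled in plain Java. This is a simplified sketch, not Volley's actual code: real dispatchers also talk to the cache, network, and delivery objects, and there is a separate cache dispatcher; here a "request" is just a String.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.PriorityBlockingQueue;

// Simplified model of RequestQueue.start(): four long-lived dispatcher threads
// block on take() and handle whatever arrives, so no thread is created per request.
class DispatcherSketch {
    final PriorityBlockingQueue<String> networkQueue = new PriorityBlockingQueue<>();
    final Queue<String> processed = new ConcurrentLinkedQueue<>();
    final Thread[] dispatchers = new Thread[4]; // Volley's default pool size
    final CountDownLatch done;

    DispatcherSketch(int expectedRequests) { done = new CountDownLatch(expectedRequests); }

    void start() {
        for (int i = 0; i < dispatchers.length; i++) {
            dispatchers[i] = new Thread(() -> {
                while (true) {
                    try {
                        String request = networkQueue.take(); // blocks until work arrives
                        processed.add("handled:" + request);
                        done.countDown();
                    } catch (InterruptedException e) {
                        return; // stop() interrupts the thread, like Volley's quit()
                    }
                }
            });
            dispatchers[i].start();
        }
    }

    void stop() {
        for (Thread t : dispatchers) if (t != null) t.interrupt();
    }
}
```

The point of the design: the threads are created once at queue construction, and the blocking queue is the only synchronization primitive the producer (add()) and consumers need.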


0x3. Request Scheduling Design

Requests are enqueued by calling RequestQueue → add().

Three collections are involved; their roles can be deduced from the corresponding comments:

  • mCurrentRequests: HashSet → the set of all requests being processed (both waiting and in flight);
  • mNetworkQueue: PriorityBlockingQueue → queue of requests that skip the cache;
  • mCacheQueue: PriorityBlockingQueue → queue of cacheable requests;

Enqueueing is fairly simple. Looking at the actual scheduling, it is easy to see there are two paths: cached (the default) and uncached. Let's start with the uncached path ~
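The enqueue routing can be sketched as follows. This is a simplified model of RequestQueue.add(), under the assumption stated in the comments: real Volley also de-duplicates requests with the same cache key via a waiting-requests map, which is omitted here.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.PriorityBlockingQueue;

// Sketch of RequestQueue.add(): every request is tracked in mCurrentRequests,
// then routed to the cache queue (the default) or straight to the network queue.
class AddSketch {
    static class Request {
        final String url;
        boolean shouldCache = true; // Volley caches by default
        Request(String url) { this.url = url; }
    }

    final Set<Request> currentRequests = new HashSet<>();
    final PriorityBlockingQueue<Request> networkQueue = new PriorityBlockingQueue<>(11, (a, b) -> 0);
    final PriorityBlockingQueue<Request> cacheQueue = new PriorityBlockingQueue<>(11, (a, b) -> 0);

    /** Returns which queue the request landed in, for illustration. */
    String add(Request request) {
        synchronized (currentRequests) {
            currentRequests.add(request); // tracked until finish() removes it
        }
        if (!request.shouldCache) {
            networkQueue.add(request);
            return "network";
        }
        cacheQueue.add(request);
        return "cache";
    }
}
```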

① Uncached requests

NetworkDispatcher → run():

Requests are pulled from the queue in an endless loop, and handling a request has four possible outcomes:

  • Request cancelled → request.finish("network-discard-cancelled");
  • Response not modified (304) → request.finish("not-modified");
  • Success → mDelivery.postResponse(request, response);
  • Error → mDelivery.postError(request, volleyError);

Request → finish():

It's simple: remove the request from the collections, log, and invoke the request-finished callback. Now for the last two outcomes, look at ExecutorDelivery.

mResponsePoster is initialized as follows:

So it is an Executor that distributes tasks, and since the subsequent operations must run on the main thread, it posts through a Handler.
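The idea can be sketched off-Android by injecting the Executor. This is a simplified model of ExecutorDelivery, with assumptions: the Android Handler is replaced by any Executor, the request/response pair is reduced to a String, and `uiSink` stands in for the UI update.

```java
import java.util.concurrent.Executor;

// Sketch of ExecutorDelivery: results are handed to an Executor whose execute()
// normally posts a Runnable to a main-thread Handler. Injecting the executor
// lets the same code run anywhere.
class DeliverySketch {
    private final Executor responsePoster;

    DeliverySketch(final Executor executor) {
        this.responsePoster = executor;
    }

    void postResponse(final String response, final StringBuilder uiSink) {
        // In Volley this wraps request + response in a ResponseDeliveryRunnable.
        responsePoster.execute(() -> uiSink.append(response));
    }
}
```

On Android the injected executor would be `command -> handler.post(command)`; in a plain JVM test, `Runnable::run` delivers inline.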

Then let's see what the task does: ResponseDeliveryRunnable → run():

The flow of a normal request is easy to understand; now look at how cached requests are scheduled.

② Cached requests

CacheDispatcher → run() likewise delegates to processRequest():

Simply put: if there is a cache entry and it has not expired, the cached response is delivered directly; otherwise the request is added to the network queue. Now let's look at the cache design.
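That decision boils down to a few lines. A minimal sketch, with assumptions: `ttl` plays the role of Volley's Cache.Entry.ttl (an absolute expiry time), and the soft-TTL "deliver but also refresh" case that real Volley supports is omitted.

```java
// Sketch of the cache dispatcher's core decision in processRequest(): a fresh
// hit is answered from cache, a miss or a stale entry goes to the network queue.
class CacheDecisionSketch {
    static class Entry {
        final String data;
        final long ttl; // absolute expiry time in ms
        Entry(String data, long ttl) { this.data = data; this.ttl = ttl; }
        boolean isExpired(long now) { return ttl < now; }
    }

    /** Returns "deliver:<data>" on a fresh hit, otherwise "network". */
    static String process(Entry entry, long now) {
        if (entry == null) return "network";          // cache miss
        if (entry.isExpired(now)) return "network";   // hit, but stale
        return "deliver:" + entry.data;               // fresh hit
    }
}
```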


0x4. Cache Design

First, caching applies to requests that do not modify server-side data, so GET requests are typically cached while POST requests rarely are.

Second, caching is not necessary, but it provides two benefits:

  • Client/browser → lower network latency, faster page loads;
  • Server → less bandwidth consumption, less server load;

Before we dive into the implementation details of Volley caching, let’s try designing a cache ourselves, starting with a crude solution:

  • Store key/value pairs: URL as the key, the cached response as the value;
  • Before executing a request, look up the URL in the collection; on a hit, return the cache directly; on a miss, execute the request and save the response afterwards;

Here's the problem: the server-side resource can change at any moment, while the client keeps using its local cache, so the result can be wrong!

  • With server cooperation, attach a validity period to the resource: either an explicit expiry time, or a creation time plus a max age. The client cache is bound to this period;
  • When the client makes a request and hits the cache, it checks whether the cache is still within its validity period; if so, it returns the cache directly, otherwise it executes the request.

The problem arises again: the server-side resource changes, but the client is unaware, and since the cache is still "valid" it never requests the new data.

  • Having the server actively notify clients of changes is clearly not great (though pushing a refresh to clients is possible);
  • So the client still has to ask the server;
  • The server attaches a tag to the resource and hands it to the client on the first request; on the next request the client sends the tag back. The server recomputes the tag and compares: equal means unchanged, different means the resource has changed, and the server answers accordingly. If nothing changed, only a 304 status code is returned; if it changed, a 200 plus the new response is returned. Based on the status code, the client decides whether to read its cache or parse the new response. A bonus: a resource may be past its expiry yet still unchanged, in which case the client again avoids updating its cache.
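The negotiation above can be sketched as toy client/server logic. This is a simplified model of the tag-based (ETag-style) handshake described in the bullets, not HTTP code: the "tag" is a plain string and the response is encoded as `"304"` or `"200:<body>:<tag>"`.

```java
// Sketch of tag-based revalidation: the client stores the tag with its cache,
// sends it back on the next request, and the server answers 304 (unchanged)
// or 200 + new body + new tag.
class RevalidationSketch {
    static class Server {
        String body = "v1";
        String tag = "tag-1";
        /** Returns "304" or "200:<body>:<tag>" depending on the client's tag. */
        String handle(String clientTag) {
            if (tag.equals(clientTag)) return "304";
            return "200:" + body + ":" + tag;
        }
    }

    static class Client {
        String cachedBody;
        String cachedTag;
        String fetch(Server server) {
            String response = server.handle(cachedTag);
            if (response.equals("304")) {
                return cachedBody;            // resource unchanged: reuse the cache
            }
            String[] parts = response.split(":");
            cachedBody = parts[1];            // update cache with the new body and tag
            cachedTag = parts[2];
            return cachedBody;
        }
    }
}
```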

This is essentially how HTTP cache consistency is maintained; the extra information mentioned above travels in the message headers:

HTTP caching rules are divided into two broad categories, depending on whether the request needs to go back to the server:

  • Mandatory cache → while the cached data is still fresh it is used directly; the Expires / Cache-Control response headers carry the expiry rule;
  • Comparison cache → on the first request the server returns the response together with a cache validator; on subsequent requests the client sends the validator back, and the server uses it to decide: 304 means the client may use its cached data, anything else means it may not. There are two kinds of validators: resource modification time → Last-Modified / If-Modified-Since, and a server-generated resource tag → ETag / If-None-Match, the latter taking priority over the former.

The two types of caches can be mixed! The flow chart of the HTTP cache policy is as follows:

This is only a rough outline; dig deeper if you are interested.

Volley also uses this caching strategy:

Then look at the Cache interface, which defines an Entry entity plus the cache-related operations.

Its implementation class is DiskBasedCache; let's look at put():

To summarize the flow: the header and the response data are written to a file in turn, and a copy of the header is kept in an in-memory map, which is used at request time to decide whether the cache is usable. Then look at get():

Understandably, on a cache hit the response entry is rebuilt from stream → byte array → Entry instance; otherwise null is returned, meaning no usable cache. Leaving the other methods aside, how is this cache key designed?

NetworkDispatcher → processRequest() :

Request → getCacheKey() :

So there are two kinds of keys:

  • GET or DEPRECATED_GET_OR_POST → the URL itself;
  • Anything else → the numeric method code + "-" + URL.

Using the URL directly as the key: simple and crude indeed!
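The key logic fits in one expression. A sketch of Request → getCacheKey() as described above; the method constants mirror Volley's Request.Method values (DEPRECATED_GET_OR_POST = -1, GET = 0, POST = 1).

```java
// Sketch of Request.getCacheKey(): GET (and the legacy GET-or-POST method)
// use the URL directly; everything else prefixes the numeric method code.
class CacheKeySketch {
    static final int DEPRECATED_GET_OR_POST = -1;
    static final int GET = 0;
    static final int POST = 1;

    static String cacheKey(int method, String url) {
        return (method == GET || method == DEPRECATED_GET_OR_POST)
                ? url
                : Integer.toString(method) + '-' + url;
    }
}
```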


0x5. Request Handling Details

Back to Volley's entry method newRequestQueue(); here is one of its extensibility points:

HurlStack wraps HttpURLConnection and HttpClientStack wraps HttpClient; you can also provide your own by extending the BaseHttpStack class and overriding executeRequest() to return the processed response.

On top of this sits a layer of the proxy pattern: the proxy class BasicNetwork wraps the underlying stack and performs some additional work.

That additional work is: attach the cache-related request headers, and handle the response result or exception. On an exception, retries go through attemptRetryOnException():

The retry policy is defined by the RetryPolicy interface:

Following the default implementation class, DefaultRetryPolicy, you can see the default maximum number of retries is 1:

Each retry() call increments the count; once it exceeds the maximum, the exception is rethrown.

Notice that this only counts; it neither re-executes the request nor re-enqueues it. So how does the request actually get retried?

The key is the loop: attemptRetryOnException() is called inside a catch block within a while(true) loop, so if it returns normally the loop simply executes the request again; once the retry count is exhausted it throws, breaking out of the loop.

Won't that crash? Of course not, because the NetworkDispatcher call site wraps this method in its own try-catch:

A really elegant trick!
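The whole mechanism can be reproduced in a few lines. A sketch of DefaultRetryPolicy plus the surrounding retry loop, with assumptions: the defaults (2500 ms initial timeout, 1 max retry, backoff multiplier 1) mirror Volley's, but `perform()` simulates the network with a boolean array instead of real I/O.

```java
// Sketch of DefaultRetryPolicy and the while(true)/catch loop around it:
// retry() only bumps a counter (and the timeout by a backoff multiplier) and
// rethrows once the attempts are used up, which breaks the surrounding loop.
class RetrySketch {
    int currentTimeoutMs = 2500;           // Volley's default initial timeout
    int currentRetryCount = 0;
    final int maxNumRetries = 1;           // Volley's default max retries
    final float backoffMultiplier = 1f;    // Volley's default backoff

    void retry(Exception error) throws Exception {
        currentRetryCount++;
        currentTimeoutMs += (int) (currentTimeoutMs * backoffMultiplier);
        if (currentRetryCount > maxNumRetries) {
            throw error; // no attempts left: propagate, the loop exits
        }
    }

    /** Simulates performRequest(): attempts[i] says whether try i succeeds. */
    static String perform(boolean[] attempts, RetrySketch policy) throws Exception {
        int i = 0;
        while (true) {
            try {
                if (!attempts[i++]) throw new Exception("timeout");
                return "ok after " + i + " attempt(s)";
            } catch (Exception e) {
                policy.retry(e); // either returns (loop runs again) or rethrows
            }
        }
    }
}
```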

One final point: when reading response data, Volley takes byte buffers from a pool instead of allocating a fresh block of memory each time.

Why? Network requests are generally frequent, and constantly creating byte[] arrays would trigger frequent GC, indirectly hurting app performance. So Volley defines a ByteArrayPool to recycle buffers.

It’s a little confusing, isn’t it? Let me just say this briefly:

  • The pool's default size is DEFAULT_POOL_SIZE = 4096 bytes = 4KB. It maintains two lists: one ordered by most recent use (list ①) and one ordered by byte[] size (list ②);
  • Taking a buffer from the pool: rather than allocating directly, first scan list ② for a buffer whose length >= the requested len; if found, return it, otherwise allocate a new one;
  • Returning a buffer to the pool: if the buffer is larger than the pool limit it is simply discarded; otherwise it is appended to the end of list ① and inserted at the right position in list ② via binary search, and the pooled byte count grows. If the total then exceeds the limit, the least recently used buffers (from the head of list ①) are evicted until it fits;
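The three bullets above can be sketched as follows. This is a simplified model of ByteArrayPool: real Volley inserts into the size-ordered list via binary search, while this sketch uses a linear scan for brevity.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Sketch of ByteArrayPool: one list ordered by last use (for trimming) and one
// ordered by buffer size (for lookup). getBuf() reuses the smallest buffer that
// is large enough; returnBuf() re-inserts and evicts the least recently used.
class PoolSketch {
    static final int DEFAULT_POOL_SIZE = 4096; // 4 KB, Volley's default

    private final List<byte[]> byLastUse = new LinkedList<>(); // list (1)
    private final List<byte[]> bySize = new ArrayList<>();     // list (2)
    private int currentSize = 0;
    private final int sizeLimit;

    PoolSketch(int sizeLimit) { this.sizeLimit = sizeLimit; }

    synchronized byte[] getBuf(int len) {
        for (int i = 0; i < bySize.size(); i++) {
            byte[] buf = bySize.get(i);
            if (buf.length >= len) {            // first (smallest) big-enough buffer
                currentSize -= buf.length;
                bySize.remove(i);
                byLastUse.remove(buf);
                return buf;
            }
        }
        return new byte[len];                   // nothing reusable: allocate
    }

    synchronized void returnBuf(byte[] buf) {
        if (buf == null || buf.length > sizeLimit) return; // too big to pool
        byLastUse.add(buf);                     // most recently used at the tail
        int pos = 0;                            // keep list (2) sorted by length
        while (pos < bySize.size() && bySize.get(pos).length < buf.length) pos++;
        bySize.add(pos, buf);
        currentSize += buf.length;
        while (currentSize > sizeLimit) {       // evict least recently used
            byte[] old = byLastUse.remove(0);
            bySize.remove(old);
            currentSize -= old.length;
        }
    }
}
```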

0x6. Extensibility

Take a look at the available interfaces and abstract classes to see what can be customized.

The RequestQueue constructor accepts:

  • Cache → caching; the default writes response data to disk, but you can implement the interface yourself to build a second- or third-level cache;
  • Network → the wrapper around the request stack; HttpURLConnection and HttpClient are wrapped by default, and other HTTP libraries can be plugged in too;
  • ResponseDelivery → response delivery, i.e. the follow-up handling of responses, both normal and error cases;

The Request constructor accepts:

  • RetryPolicy → the retry policy for failed requests;

Other:

  • Request → the request itself; toolbox ships common implementations such as JsonRequest, ImageRequest, StringRequest, etc.;
  • Authenticator → token authorization;

That's about it. There is also an image-loading utility in toolbox, but its performance is not great, so I won't go into it.


0x7. Summary

This section pulled apart the design of the Volley request library from several angles, and I benefited a lot: at least now, if I had to wrap a request library myself, I would know where to start.

Source code is hard to chew through, but once you grasp the design principles everything suddenly clicks. Wonderful! It also matches what a former boss of mine said: savor classic project source code the way you savor classic books, and it's refreshing ~

References:

  • Understanding browser Caching

  • Volley Official Documentation

  • Android Asynchronous Network Request Framework -Volley