About the author

Guo Xiaoxing is a programmer and guitarist who works mainly on Android platform infrastructure. Questions and technical discussion are welcome.

Article directory

  • 1 Request and response process
    • 1.1 Request encapsulation
    • 1.2 Sending a request
    • 1.3 Scheduling of requests
    • 1.4 Request processing
  • 2 Interceptors
    • 2.1 RetryAndFollowUpInterceptor
    • 2.2 BridgeInterceptor
    • 2.3 CacheInterceptor
    • 2.4 ConnectInterceptor
    • 2.5 CallServerInterceptor
  • 3 Connection mechanism
    • 3.1 Creating a Connection
    • 3.2 Connection pool
  • 4 Caching mechanism
    • 4.1 Cache Policy
    • 4.2 Cache Management

For more Android Open Framework source analysis, see Android Open Framework Analysis.

Back in the slash-and-burn days of Android, we typically used HttpURLConnection or Apache HTTP Client for network requests; both support HTTPS, streaming uploads and downloads, configurable timeouts, and connection pooling. However, as business scenarios grew more complex and traffic consumption needed to be optimized, OkHttp came into being, and it has enjoyed an excellent reputation ever since.

But what exactly makes it good? And since it is so good, what design concepts and implementation ideas are worth learning from it? 🤔

Today, with these questions in mind, let's find out.

An HTTP+HTTP/2 client for Android and Java applications.

The official website: https://github.com/square/okhttp

Source version: 3.9.1

Before looking at the source code, let’s take a look at a simple example that will lead us through the implementation of Okhttp.

👉 example

OkHttpClient okHttpClient = new OkHttpClient.Builder()
        .build();
Request request = new Request.Builder()
        .url(url)
        .build();
okHttpClient.newCall(request).enqueue(new Callback() {
    @Override
    public void onFailure(Call call, IOException e) {}

    @Override
    public void onResponse(Call call, Response response) throws IOException {}
});

In the example above, we build a client OkHttpClient and a Request, and then call the newCall() method to send the Request. From this small example, we can see that OkHttpClient acts as a context or butler, which receives the task we give it, and distributes the specific work to various subsystems to complete.
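
For comparison, a synchronous request uses the same newCall() entry point but calls execute() instead of enqueue(). A minimal sketch (the url variable is assumed to be defined, and execute() must not be called on the Android main thread):

Request request = new Request.Builder()
        .url(url)
        .build();
try (Response response = okHttpClient.newCall(request).execute()) {
    if (response.isSuccessful()) {
        String body = response.body().string(); // the body can only be consumed once
    }
} catch (IOException e) {
    // handle the failure
}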

The subsystem hierarchy of Okhttp is shown below:

[Figure: OkHttp subsystem hierarchy]

  • Network configuration layer: uses the Builder pattern to configure parameters such as timeouts and interceptors, which OkHttp distributes to the subsystems that need them (see the configuration sketch after this list).
  • Redirection layer: responsible for handling redirects.
  • Header concatenation layer: responsible for transforming a user-constructed request into a request sent to the server, and the response returned by the server into a user-friendly response.
  • HTTP cache layer: reads from and updates the cache.
  • Connection layer: a relatively complex layer that implements network protocols, internal interceptors, security authentication, connections and the connection pool. This layer does not yet initiate the real connection; it only handles the connection parameters.
  • Data response layer: responsible for reading response data from the server.
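
As a rough illustration of the configuration layer, here is a minimal sketch of configuring a client through the Builder (the timeout values are arbitrary examples, not recommendations; interceptors, the cache, and the connection pool are configured the same way, as shown in later sections):

OkHttpClient client = new OkHttpClient.Builder()
        .connectTimeout(10, TimeUnit.SECONDS)   // connect timeout
        .readTimeout(10, TimeUnit.SECONDS)      // read timeout
        .writeTimeout(10, TimeUnit.SECONDS)     // write timeout
        .build();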

There are several key roles to understand in the overall Okhttp system:

  • OkHttpClient: the communication client, which centrally manages initiating requests and parsing responses.
  • Call: an interface that abstractly describes an HTTP request. Its concrete implementation is RealCall, created by the Call.Factory.
  • Request: encapsulates the details of a request, such as the url and headers.
  • RequestBody: the body of a request, used to carry form data, streams, and other payloads.
  • Response: the response to an HTTP request, from which response information such as the response headers can be obtained.
  • ResponseBody: the body of an HTTP response. It can be read only once and is then closed, so calling responseBody.string() repeatedly to retrieve the result will throw an error (see the sketch after this list).
  • Interceptor: a request interceptor responsible for intercepting and processing requests. Features such as the network request, caching, and transparent compression are each implemented as an Interceptor, and all interceptors are finally linked into an Interceptor.Chain — a typical chain-of-responsibility pattern.
  • StreamAllocation: controls the allocation and release of Connection and Stream resources.
  • RouteSelector: selects routes and supports automatic reconnection.
  • RouteDatabase: records a blacklist of routes that failed to connect.
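
A small sketch of the one-shot nature of ResponseBody (call is assumed to be a Call that has already been created); the second read fails because the underlying stream has already been consumed and closed:

try (Response response = call.execute()) {
    ResponseBody body = response.body();
    String first = body.string();   // reads the whole body and closes the underlying stream
    String second = body.string();  // fails (e.g. IllegalStateException: closed)
} catch (IOException e) {
    // handle the failure
}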

Let's first analyze the complete request and response flow so that we can get a sense of the OkHttp system as a whole.

1 Request and response process

In OkHttp's request and response flow, the Dispatcher constantly fetches requests (Calls) from the request queue; depending on whether the data has already been cached, it is obtained either from the memory cache or from the server. Requests are divided into synchronous and asynchronous ones. A synchronous request returns the Response directly via Call.execute(), while an asynchronous request adds an AsyncCall to the request queue via Call.enqueue() and obtains the server's result through a Callback.

A picture is worth a thousand words. Let’s take a look at the entire flow chart, as follows:

[Figure: OkHttp request and response flow chart]

If you look carefully at this flowchart, doesn't it resemble the OSI seven-layer model of computer networking? OkHttp indeed adopts this idea, using Interceptors to layer the whole framework, which simplifies the design logic and improves the framework's extensibility.

From the above flowchart, we can see that in the entire request and response process, we need to focus on the following points:

  • How does the Dispatcher dispatch requests?
  • How are the interceptors implemented?
  • How are connections and connection pools established and maintained?

With these questions in mind, let's dig into the source code.

Let’s take a look at the specific function call chain. The sequence diagram of the request and response looks like this:

[Figure: request and response sequence diagram]

To help us understand the details of the entire request and response process, let's first look at how a request is encapsulated and sent.

1.1 Request encapsulation

Requests in OkHttp are represented by the Call interface, whose actual implementation is the RealCall class, as shown below:

The Call interface is as follows:

public interface Call extends Cloneable {

  // Returns the current request
  Request request();

  // Synchronous request method, which blocks the current thread until the result is returned
  Response execute() throws IOException;

  // Asynchronous request method, which queues the request and waits for the result to be returned
  void enqueue(Callback responseCallback);

  // Cancels the request
  void cancel();

  // Returns true after execute() or enqueue(Callback responseCallback) has been called
  boolean isExecuted();

  // Whether the request has been cancelled
  boolean isCanceled();

  // Creates an identical new request
  Call clone();

  interface Factory {
    Call newCall(Request request);
  }
}

A RealCall is constructed as follows:

final class RealCall implements Call {

  private RealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
    // The OkHttpClient we built, passed in as a parameter
    this.client = client;
    this.originalRequest = originalRequest;
    // Whether this is a WebSocket request; WebSocket is used to establish persistent connections
    this.forWebSocket = forWebSocket;
    // Build the RetryAndFollowUpInterceptor
    this.retryAndFollowUpInterceptor = new RetryAndFollowUpInterceptor(client, forWebSocket);
  }
}

RealCall implements the Call interface and encapsulates the invocation of a request. The logic of this constructor is simple: it assigns the incoming OkHttpClient, Request, and forWebSocket fields, and creates the RetryAndFollowUpInterceptor, which handles retries and redirects.

1.2 Sending a request

RealCall divides requests into two types:

  • A synchronous request
  • An asynchronous request

An asynchronous request simply adds a callback compared with a synchronous one. The two methods are implemented as follows:

An asynchronous request

final class RealCall implements Call {
    
      @Override public void enqueue(Callback responseCallback) {
        synchronized (this) {
          if (executed) throw new IllegalStateException("Already Executed");
          executed = true;
        }
        captureCallStackTrace();
        client.dispatcher().enqueue(new AsyncCall(responseCallback));
      }
}

A synchronous request

final class RealCall implements Call {
    @Override public Response execute() throws IOException {
      synchronized (this) {
        if (executed) throw new IllegalStateException("Already Executed");
        executed = true;
      }
      captureCallStackTrace();
      try {
        client.dispatcher().executed(this);
        Response result = getResponseWithInterceptorChain();
        if (result == null) throw new IOException("Canceled");
        return result;
      } finally {
        client.dispatcher().finished(this);
      }
    }
}

As you can see from the above implementation, both synchronous and asynchronous requests are handled by the Dispatcher:

  • Synchronous request: Execute directly and return the result of the request
  • Asynchronous request: Construct an AsyncCall and add itself to the processing queue.

AsyncCall is essentially a Runnable; the Dispatcher uses an ExecutorService to schedule and execute these Runnables.
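
AsyncCall extends NamedRunnable, which is essentially a Runnable that renames its thread while the work runs and delegates the real work to an abstract execute() method. A simplified sketch (paraphrased, not the verbatim OkHttp source):

public abstract class NamedRunnable implements Runnable {
  protected final String name;

  public NamedRunnable(String format, Object... args) {
    this.name = String.format(format, args);   // e.g. "OkHttp https://host/..."
  }

  @Override public final void run() {
    String oldName = Thread.currentThread().getName();
    Thread.currentThread().setName(name);
    try {
      execute();                               // implemented by AsyncCall
    } finally {
      Thread.currentThread().setName(oldName);
    }
  }

  protected abstract void execute();
}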

final class AsyncCall extends NamedRunnable {
    private final Callback responseCallback;

    AsyncCall(Callback responseCallback) {
      super("OkHttp %s", redactedUrl());
      this.responseCallback = responseCallback;
    }

    String host() {
      return originalRequest.url().host();
    }

    Request request() {
      return originalRequest;
    }

    RealCall get() {
      return RealCall.this;
    }

    @Override protected void execute() {
      boolean signalledCallback = false;
      try {
        Response response = getResponseWithInterceptorChain();
        if (retryAndFollowUpInterceptor.isCanceled()) {
          signalledCallback = true;
          responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
          signalledCallback = true;
          responseCallback.onResponse(RealCall.this, response);
        }
      } catch (IOException e) {
        if (signalledCallback) {
          // Do not signal the callback twice!
          Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
        } else {
          responseCallback.onFailure(RealCall.this, e);
        }
      } finally {
        client.dispatcher().finished(this);
      }
    }
}

As can be seen from the code above, both synchronous and asynchronous requests ultimately obtain the Response via getResponseWithInterceptorChain(); an asynchronous request merely adds a round of thread scheduling so that the work is executed asynchronously.

Let’s look at the implementation in Dispatcher first.

1.3 Scheduling of requests

public final class Dispatcher {
    
      private int maxRequests = 64;
      private int maxRequestsPerHost = 5;
    
      /** Ready async calls in the order they'll be run. */
      private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();
    
      /** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
      private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();
    
      /** Running synchronous calls. Includes canceled calls that haven't finished yet. */
      private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();
      
      /** Used by {@code Call#execute} to signal it is in-flight. */
      synchronized void executed(RealCall call) {
        runningSyncCalls.add(call);
      }

      synchronized void enqueue(AsyncCall call) {
      // No more than 64 asynchronous requests are running, and no more than 5 asynchronous requests are running on the same host
      if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
        runningAsyncCalls.add(call);
        executorService().execute(call);
      } else {
        readyAsyncCalls.add(call);
      }
    }
}

The Dispatcher is a task scheduler that internally maintains three double-ended queues (deques):

  • readyAsyncCalls: asynchronous requests that are ready to run
  • runningAsyncCalls: asynchronous requests that are currently running
  • runningSyncCalls: synchronous requests that are currently running

It keeps track of both asynchronous and synchronous requests, and uses an ExecutorService to schedule and execute the AsyncCalls.

A synchronous request is added directly to the running synchronous request queue runningSyncCalls. An asynchronous request first goes through a check:

If fewer than 64 asynchronous requests are running and fewer than 5 asynchronous requests are running against the same host, the request is added to the running asynchronous request queue runningAsyncCalls and executed immediately; otherwise it is added to readyAsyncCalls to wait.
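
These limits are not fixed for applications: the Dispatcher exposes setters for them, and a custom Dispatcher can also be supplied through OkHttpClient.Builder.dispatcher(). A minimal sketch (the concrete numbers are arbitrary examples):

OkHttpClient client = new OkHttpClient();

// Adjust the global and per-host limits for asynchronous calls.
client.dispatcher().setMaxRequests(32);
client.dispatcher().setMaxRequestsPerHost(8);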

Having looked at the Dispatcher implementation, let's move on to getResponseWithInterceptorChain(); this method is where a request is actually issued and processed.

1.4 Request processing

final class RealCall implements Call {
      Response getResponseWithInterceptorChain() throws IOException {
        // Build a full stack of interceptors.
        List<Interceptor> interceptors = new ArrayList<>();
        // As you can see here, our custom interceptors are executed first
        interceptors.addAll(client.interceptors());
        // Add the retry and redirect interceptor
        interceptors.add(retryAndFollowUpInterceptor);
        interceptors.add(new BridgeInterceptor(client.cookieJar()));
        interceptors.add(new CacheInterceptor(client.internalCache()));
        interceptors.add(new ConnectInterceptor(client));
        if (!forWebSocket) {
          interceptors.addAll(client.networkInterceptors());
        }
        interceptors.add(new CallServerInterceptor(forWebSocket));

        Interceptor.Chain chain = new RealInterceptorChain(
            interceptors, null, null, null, 0, originalRequest);
        return chain.proceed(originalRequest);
      }
}

In just a few lines of code, the interceptors complete the entire processing of a request. The interceptors integrate features such as the network request, caching, and transparent compression; the implementation uses the chain-of-responsibility pattern, with each interceptor performing its own duty, and they are finally linked together into an Interceptor.Chain. Their responsibilities are as follows:

  • RetryAndFollowUpInterceptor: responsible for retrying failed requests and handling redirects.
  • BridgeInterceptor: Is responsible for converting user-constructed requests into requests sent to the server, and converting the response returned by the server into a user-friendly response.
  • CacheInterceptor: Reads and updates the cache.
  • ConnectInterceptor: Is responsible for establishing a connection with the server.
  • CallServerInterceptor: Is responsible for reading response data from the server.

Position determines function: the first interceptor in the list is the outermost one, and the last is the one that actually communicates with the server. A request is passed layer by layer from RetryAndFollowUpInterceptor down to CallServerInterceptor, with each layer doing its own processing; the result is then passed layer by layer back up from CallServerInterceptor to RetryAndFollowUpInterceptor, and finally the initiator of the request receives the result returned by the server.

The interceptors are the core of OkHttp. Let's analyze the implementation of each interceptor.

2 Interceptors

As we can see from the flow above, each step is handled by the corresponding interceptor, and all interceptors (including our custom ones) implement the Interceptor interface, as shown below:

public interface Interceptor {
  Response intercept(Chain chain) throws IOException;

  interface Chain {
    Request request();

    Response proceed(Request request) throws IOException;
    
    // Returns the connection on which the request is executed
    @Nullable Connection connection();
  }
}

OkHttp's built-in interceptors are as follows:

  • RetryAndFollowUpInterceptor: responsible for retrying failed requests and handling redirects.
  • BridgeInterceptor: Is responsible for converting user-constructed requests into requests sent to the server, and converting the response returned by the server into a user-friendly response.
  • CacheInterceptor: Reads and updates the cache.
  • ConnectInterceptor: Is responsible for establishing a connection with the server.
  • CallServerInterceptor: Is responsible for reading response data from the server.

Let's continue with how the RealInterceptorChain processes the chain level by level.

public final class RealInterceptorChain implements Interceptor.Chain {
    
     public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec, RealConnection connection) throws IOException {
        if (index >= interceptors.size()) throw new AssertionError();
    
        calls++;
    
        // If we already have a stream, confirm that the incoming request will use it.
        if (this.httpCodec != null && !this.connection.supportsUrl(request.url())) {
          throw new IllegalStateException("network interceptor " + interceptors.get(index - 1)
              + " must retain the same host and port");
        }
    
        // If we already have a stream, confirm that this is the only call to chain.proceed().
        if (this.httpCodec != null && calls > 1) {
          throw new IllegalStateException("network interceptor " + interceptors.get(index - 1)
              + " must call proceed() exactly once");
        }
    
        // Call the next interceptor in the chain.
        RealInterceptorChain next = new RealInterceptorChain(
            interceptors, streamAllocation, httpCodec, connection, index + 1, request);
        Interceptor interceptor = interceptors.get(index);
        Response response = interceptor.intercept(next);
    
        // Confirm that the next interceptor made its required call to chain.proceed().
        if (httpCodec != null && index + 1 < interceptors.size() && next.calls != 1) {
          throw new IllegalStateException("network interceptor " + interceptor
              + " must call proceed() exactly once");
        }
    
        // Confirm that the intercepted response isn't null.
        if (response == null) {
          throw new NullPointerException("interceptor " + interceptor + " returned null");
        }
    
        return response;
      }
}

This method is interesting: each call to proceed() builds a new RealInterceptorChain object and invokes the next interceptor, continuing the request until all interceptors have been processed and a response is returned.

Every interceptor's intercept() method follows the same pattern:

@Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
    //1 Request phase: do whatever this interceptor is responsible for in the request phase

    //2 Call RealInterceptorChain.proceed(), which recursively calls the next interceptor's intercept() method
    response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);

    //3 Response phase: do whatever this interceptor is responsible for in the response phase, then return to the previous interceptor
    return response;
}

From the above description, we can see that a Request travels forward through the interceptors in order, while the Response travels backward through them. This borrows from the OSI seven-layer model mentioned earlier: CallServerInterceptor is the equivalent of the lowest, physical layer; the request is passed from the top layer down to the bottom layer, and the response is passed from the bottom layer back up to the top. It's a beautiful design.
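
As a concrete illustration of this pattern, here is a minimal sketch of a custom application interceptor that logs how long each request takes (the logging target is an arbitrary example); it would be registered via OkHttpClient.Builder.addInterceptor():

class TimingInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();            // 1 request phase
    long started = System.nanoTime();

    Response response = chain.proceed(request);   // 2 hand off to the next interceptor

    long tookMs = (System.nanoTime() - started) / 1_000_000L;   // 3 response phase
    System.out.println("Request to " + request.url() + " took " + tookMs + " ms");
    return response;
  }
}

// Registration:
// OkHttpClient client = new OkHttpClient.Builder()
//     .addInterceptor(new TimingInterceptor())
//     .build();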

The order in which the interceptors execute is: RetryAndFollowUpInterceptor -> BridgeInterceptor -> CacheInterceptor -> ConnectInterceptor -> CallServerInterceptor.

2.1 RetryAndFollowUpInterceptor

RetryAndFollowUpInterceptor is responsible for retrying failed requests and handling redirects.

public final class RetryAndFollowUpInterceptor implements Interceptor {
    
    private static final int MAX_FOLLOW_UPS = 20;
    
     @Override public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
    
        //1. Create a StreamAllocation object. StreamAllocation is a management class that maintains
        //Connections, Streams, and Calls. It prepares the Socket connection object and the input/output stream objects.
        streamAllocation = new StreamAllocation(
            client.connectionPool(), createAddress(request.url()), callStackTrace);
    
        // Number of redirects
        int followUpCount = 0;
        Response priorResponse = null;
        while (true) {
          if (canceled) {
            streamAllocation.release();
            throw new IOException("Canceled");
          }
    
          Response response = null;
          boolean releaseConnection = true;
          try {
            //2. Proceed to the next Interceptor, BridgeInterceptor
            response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);
            releaseConnection = false;
          } catch (RouteException e) {
            //3. If an exception is thrown, check whether the connection can continue.
            if (!recover(e.getLastConnectException(), false, request)) {
              throw e.getLastConnectException();
            }
            releaseConnection = false;
            continue;
          } catch (IOException e) {
            // Failed to establish a connection with the server
            boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
            if (!recover(e, requestSendStarted, request)) throw e;
            releaseConnection = false;
            continue;
          } finally {
            // Other unknown exceptions are detected, the connection and resources are released
            if (releaseConnection) {
              streamAllocation.streamFailed(null);
              streamAllocation.release();
            }
          }

          // Attach the prior response if it exists; it has an empty body.
          if (priorResponse != null) {
            response = response.newBuilder()
                .priorResponse(priorResponse.newBuilder()
                        .body(null)
                        .build())
                .build();
          }
    
          //4. The Request is processed according to the response code. If the follow-up Request is not null, a redirect is performed.
          Request followUp = followUpRequest(response);
    
          if (followUp == null) {
            if (!forWebSocket) {
              streamAllocation.release();
            }
            return response;
          }
    
          closeQuietly(response.body());
    
          // The number of redirects cannot exceed 20
          if (++followUpCount > MAX_FOLLOW_UPS) {
            streamAllocation.release();
            throw new ProtocolException("Too many follow-up requests: " + followUpCount);
          }
    
          if (followUp.body() instanceof UnrepeatableRequestBody) {
            streamAllocation.release();
            throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
          }
    
          if (!sameConnection(response, followUp.url())) {
            streamAllocation.release();
            streamAllocation = new StreamAllocation(
                client.connectionPool(), createAddress(followUp.url()), callStackTrace);
          } else if (streamAllocation.codec() != null) {
            throw new IllegalStateException("Closing the body of " + response
                + " didn't close its backing stream. Bad interceptor?");
          }

          request = followUp;
          priorResponse = response;
        }
      }
}

Let's start with the StreamAllocation class. It coordinates the relationship between three entities:

  • Connections: The physical socket that connects to a remote server. This socket connection can be slow, so it has a cancellation mechanism.
  • Streams: Defines logical HTTP request/response pairs, with each connection defining the maximum number of concurrent Streams they can carry, one at a time for HTTP/1.x and multiple at a time for HTTP/2.
  • Calls: Defines a logical sequence of flows. This sequence is usually an initial request and its redirect requests. For the same connection, we usually put all flows in one call to unify their behavior.

Let's walk through the whole method again:

  1. Build a StreamAllocation object. StreamAllocation is a management class that maintains Connections, Streams, and Calls. It prepares the Socket connection object and the input/output stream objects.
  2. Proceed to the next interceptor, the BridgeInterceptor.
  3. If an exception is thrown, check whether the connection can be recovered. The following cases will not be retried:
    • The client is configured not to retry on connection failure
    • The request body cannot be sent again after the error
    • A fatal exception occurred, such as:
      • ProtocolException: a protocol exception
      • InterruptedIOException: an interrupted I/O exception
      • SSLHandshakeException: an SSL handshake exception
      • SSLPeerUnverifiedException: an SSL peer verification exception
    • There are no more routes to try
  4. The request is processed according to the response code by followUpRequest(). If the follow-up Request is not null, a redirect is performed; the number of redirects cannot exceed 20.

Finally, the follow-up request is built according to the response code by the followUpRequest() method, as shown below:

public final class RetryAndFollowUpInterceptor implements Interceptor {
      private Request followUpRequest(Response userResponse) throws IOException {
        if (userResponse == null) throw new IllegalStateException();
        Connection connection = streamAllocation.connection();
        Route route = connection != null
            ? connection.route()
            : null;
        int responseCode = userResponse.code();
    
        final String method = userResponse.request().method();
        switch (responseCode) {
          //407, proxy authentication
          case HTTP_PROXY_AUTH:
            Proxy selectedProxy = route != null
                ? route.proxy()
                : client.proxy();
            if (selectedProxy.type() != Proxy.Type.HTTP) {
              throw new ProtocolException("Received HTTP_PROXY_AUTH (407) code while not using proxy");
            }
            return client.proxyAuthenticator().authenticate(route, userResponse);
          //401, unauthorized
          case HTTP_UNAUTHORIZED:
            return client.authenticator().authenticate(route, userResponse);
          //307, 308
          case HTTP_PERM_REDIRECT:
          case HTTP_TEMP_REDIRECT:
            // "If the 307 or 308 status code is received in response to a request other than GET
            // or HEAD, the user agent MUST NOT automatically redirect the request"
            if (!method.equals("GET") && !method.equals("HEAD")) {
              return null;
            }
            // fall-through
          // 300,301,302,303
          case HTTP_MULT_CHOICE:
          case HTTP_MOVED_PERM:
          case HTTP_MOVED_TEMP:
          case HTTP_SEE_OTHER:
              
            // Whether the client allows redirection in the configuration
            if (!client.followRedirects()) return null;
    
            String location = userResponse.header("Location");
            if (location == null) return null;
            HttpUrl url = userResponse.request().url().resolve(location);
    
            // If the URL is null, redirection is not allowed
            if (url == null) return null;
    
            // Query whether there is a redirect between HTTP and HTTPS
            boolean sameScheme = url.scheme().equals(userResponse.request().url().scheme());
            if (!sameScheme && !client.followSslRedirects()) return null;
    
            // Most redirects don't include a request body.
            Request.Builder requestBuilder = userResponse.request().newBuilder();
            if (HttpMethod.permitsRequestBody(method)) {
              final boolean maintainBody = HttpMethod.redirectsWithBody(method);
              if (HttpMethod.redirectsToGet(method)) {
                requestBuilder.method("GET", null);
              } else {
                RequestBody requestBody = maintainBody ? userResponse.request().body() : null;
                requestBuilder.method(method, requestBody);
              }
              if (!maintainBody) {
                requestBuilder.removeHeader("Transfer-Encoding");
                requestBuilder.removeHeader("Content-Length");
                requestBuilder.removeHeader("Content-Type");
              }
            }

            // When redirecting across hosts, drop all authentication headers. This
            // is potentially annoying to the application layer since they have no
            // way to retain them.
            if (!sameConnection(userResponse, url)) {
              requestBuilder.removeHeader("Authorization");
            }
    
            return requestBuilder.url(url).build();
          //408, client timeout
          case HTTP_CLIENT_TIMEOUT:
            // 408's are rare in practice, but some servers like HAProxy use this response code. The
            // spec says that we may repeat the request without modifications. Modern browsers also
            // repeat the request (even non-idempotent ones.)
            if (userResponse.request().body() instanceof UnrepeatableRequestBody) {
              return null;
            }
    
            return userResponse.request();
    
          default:
            return null;
        }
      }
}

Redirection involves some network-programming knowledge that we won't cover completely here; for now it is enough to know that the role of RetryAndFollowUpInterceptor is to handle connection failures and redirects. Let's move on to the next interceptor, BridgeInterceptor.

2.2 BridgeInterceptor

BridgeInterceptor is exactly what its name suggests: a bridge that converts a user-constructed request into a request sent to the server, and converts the response returned by the server into a user-friendly response. The conversion mainly consists of adding the header information the server needs.

public final class BridgeInterceptor implements Interceptor {
    @Override public Response intercept(Chain chain) throws IOException {
        Request userRequest = chain.request();
        Request.Builder requestBuilder = userRequest.newBuilder();
    
        RequestBody body = userRequest.body();
        if (body != null) {
          //1 Wraps the Header
          MediaType contentType = body.contentType();
          if (contentType != null) {
            requestBuilder.header("Content-Type", contentType.toString());
          }
    
          long contentLength = body.contentLength();
          if (contentLength != -1) {
            requestBuilder.header("Content-Length", Long.toString(contentLength));
            requestBuilder.removeHeader("Transfer-Encoding");
          } else {
            requestBuilder.header("Transfer-Encoding", "chunked");
            requestBuilder.removeHeader("Content-Length");
          }
        }

        if (userRequest.header("Host") == null) {
          requestBuilder.header("Host", hostHeader(userRequest.url(), false));
        }
    
        if (userRequest.header("Connection") == null) {
          requestBuilder.header("Connection", "Keep-Alive");
        }
    
        // Note the trap here: if you add "Accept-Encoding: gzip" to the request yourself,
        // you must also decompress the response yourself, otherwise Response.string() will be garbled.
        // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
        // the transfer stream.
        boolean transparentGzip = false;
        if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
          transparentGzip = true;
          requestBuilder.header("Accept-Encoding", "gzip");
        }
    
        // Load cookies from the cookieJar configured on OkHttpClient
        List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
        if (!cookies.isEmpty()) {
          requestBuilder.header("Cookie", cookieHeader(cookies));
        }
    
        if (userRequest.header("User-Agent") == null) {
          requestBuilder.header("User-Agent", Version.userAgent());
        }
    
        Response networkResponse = chain.proceed(requestBuilder.build());
    
        // The Header returned by the server is parsed. If there is no cookie, it is not parsed
        HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());
    
        Response.Builder responseBuilder = networkResponse.newBuilder()
            .request(userRequest);
    
        // Check whether the server supports gZIP compression and, if so, submit the compression to the Okio library for processing
        if (transparentGzip
            && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
            && HttpHeaders.hasBody(networkResponse)) {
          GzipSource responseBody = new GzipSource(networkResponse.body().source());
          Headers strippedHeaders = networkResponse.headers().newBuilder()
              .removeAll("Content-Encoding")
              .removeAll("Content-Length")
              .build();
          responseBuilder.headers(strippedHeaders);
          responseBuilder.body(new RealResponseBody(strippedHeaders, Okio.buffer(responseBody)));
        }
    
        return responseBuilder.build();
      }
}

As its name suggests, it is a bridge that transforms the user-constructed request into a request sent to the server and the response returned by the server into a user-friendly response. In the request phase it fills in user information and adds some request headers; in the response phase it handles gzip decompression.

This method mainly works with headers, and there are a few things to note about Accept-Encoding: gzip (see the sketch after this list):

  • Accept-Encoding: gzip is added automatically when the developer has not added it
  • When Accept-Encoding is added automatically, the response is also decompressed automatically
  • If you add Accept-Encoding manually, the response is not decompressed for you
  • Automatic decompression removes Content-Length, so upper-layer Java code will see contentLength() as -1
  • Automatic decompression also removes Content-Encoding
  • Automatic decompression does not affect chunked transfer (Transfer-Encoding: chunked)
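
A minimal sketch of the manual case described above (not OkHttp source; the URL is a placeholder): when we set Accept-Encoding ourselves, we have to check Content-Encoding and decompress with Okio's GzipSource:

static String fetchGzipped(OkHttpClient client, String url) throws IOException {
  Request request = new Request.Builder()
      .url(url)
      .header("Accept-Encoding", "gzip")   // added manually, so OkHttp will not decompress for us
      .build();
  try (Response response = client.newCall(request).execute()) {
    BufferedSource source = response.body().source();
    if ("gzip".equalsIgnoreCase(response.header("Content-Encoding"))) {
      source = Okio.buffer(new GzipSource(source));   // decompress ourselves
    }
    return source.readUtf8();
  }
}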

BridgeInterceptor does a little bit of work on headers, so let’s move on to CacheInterceptor.

2.3 CacheInterceptor

We know that OkHttp has its own caching mechanism to save traffic and improve response speed; CacheInterceptor is responsible for reading and updating the cache.
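
Note that the cache is opt-in: CacheInterceptor only has something to read or write when a Cache has been configured on the client. A minimal sketch, assuming a cacheDir File is available (the 10 MiB size is an arbitrary example):

// Give the client a disk cache; CacheInterceptor will then consult it.
Cache cache = new Cache(cacheDir, 10L * 1024 * 1024);   // directory + maximum size in bytes
OkHttpClient client = new OkHttpClient.Builder()
        .cache(cache)
        .build();

// A request can also constrain the strategy explicitly, e.g. cache-only:
Request cacheOnly = new Request.Builder()
        .url("https://example.com/")                     // placeholder URL
        .cacheControl(CacheControl.FORCE_CACHE)
        .build();

With FORCE_CACHE and no usable cached response, we end up in exactly the branch below that returns the 504 "Unsatisfiable Request (only-if-cached)" response.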

public final class CacheInterceptor implements Interceptor {
    
     @Override public Response intercept(Chain chain) throws IOException {
         
        //1. Read the candidate cache; how it is read is described below.
        Response cacheCandidate = cache != null
            ? cache.get(chain.request())
            : null;
    
        long now = System.currentTimeMillis();
    
        //2. Create a cache policy, enforce cache, compare cache, etc. We will talk about cache policy next.
        CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
        Request networkRequest = strategy.networkRequest;
        Response cacheResponse = strategy.cacheResponse;
    
        if (cache != null) {
          cache.trackResponse(strategy);
        }
    
        if (cacheCandidate != null && cacheResponse == null) {
          closeQuietly(cacheCandidate.body());
        }
    
        //3. According to the policy, the network is not used and there is no cache, and error code 504 is returned.
        if (networkRequest == null && cacheResponse == null) {
          return new Response.Builder()
              .request(chain.request())
              .protocol(Protocol.HTTP_1_1)
              .code(504)
              .message("Unsatisfiable Request (only-if-cached)")
              .body(Util.EMPTY_RESPONSE)
              .sentRequestAtMillis(-1L)
              .receivedResponseAtMillis(System.currentTimeMillis())
              .build();
        }
    
        //4. According to the policy, do not use the network, there is a cache of direct return.
        if (networkRequest == null) {
          return cacheResponse.newBuilder()
              .cacheResponse(stripBody(cacheResponse))
              .build();
        }
    
        Response networkResponse = null;
        try {
          //5. If the first two interceptors do not return, the next Interceptor, ConnectInterceptor, will continue.
          networkResponse = chain.proceed(networkRequest);
        } finally {
          // If an I/O exception occurs, the cache is released
          if (networkResponse == null && cacheCandidate != null) {
            closeQuietly(cacheCandidate.body());
          }
        }

        //6. The network result has been received. If the response code is 304, use the cache and return the cached result.
        if (cacheResponse != null) {
          if (networkResponse.code() == HTTP_NOT_MODIFIED) {
            Response response = cacheResponse.newBuilder()
                .headers(combine(cacheResponse.headers(), networkResponse.headers()))
                .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
                .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
                .cacheResponse(stripBody(cacheResponse))
                .networkResponse(stripBody(networkResponse))
                .build();
            networkResponse.body().close();
    
            cache.trackConditionalCacheHit();
            cache.update(cacheResponse, response);
            return response;
          } else {
            closeQuietly(cacheResponse.body());
          }
        }

        //7. Read the network result.
        Response response = networkResponse.newBuilder()
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
    
        //8. Cache data.
        if (cache != null) {
          if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
            // Offer this request to the cache.
            CacheRequest cacheRequest = cache.put(response);
            return cacheWritingResponse(cacheRequest, response);
          }
    
          if (HttpMethod.invalidatesCache(networkRequest.method())) {
            try {
              cache.remove(networkRequest);
            } catch (IOException ignored) {
              // The cache cannot be written.
            }
          }
        }

        //9. Return the network result.
        return response;
      }
}

The process of the whole method is as follows:

  1. Read candidate caches, and we’ll talk about how we do that.
  2. Create caching policies, enforce caching, compare caching, etc., which we’ll talk about next.
  3. According to the strategy, if the network is not to be used and there is no cache, return error code 504 directly.
  4. According to the strategy, if the network is not to be used and there is a cache, return it directly.
  5. If the first two interceptors do not return, the next Interceptor, called the ConnectInterceptor, continues.
  6. The network result is received. If the response is code 304, the cache is used and the cached result is returned.
  7. Read network results.
  8. Cache the data.
  9. Returns the result of a network read.

Let’s move on to ConnectInterceptor.

2.4 ConnectInterceptor

A StreamAllocation object was initialized back in RetryAndFollowUpInterceptor. As we said, that StreamAllocation object prepares the Socket object used for the connection, but no real connection is made there; only after the header and cache information has been processed does ConnectInterceptor make the actual connection.

public final class ConnectInterceptor implements Interceptor {
    
      @Override public Response intercept(Chain chain) throws IOException {
        RealInterceptorChain realChain = (RealInterceptorChain) chain;
        Request request = realChain.request();
        StreamAllocation streamAllocation = realChain.streamAllocation();
    
        boolean doExtensiveHealthChecks = !request.method().equals("GET");
        // Create an output stream
        HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
        // Establish the connection
        RealConnection connection = streamAllocation.connection();
    
        return realChain.proceed(request, streamAllocation, httpCodec, connection);
      }
}

ConnectInterceptor establishes the connection in the request phase. It is quite simple and creates just two objects:

  • HttpCodec: Used to encode HTTP requests and decode HTTP responses
  • RealConnection: Connection object that initiates a connection to the server.

In fact, this is where OkHttp's whole connection machinery — connections, the connection pool, and so on — comes in. We will cover it separately below; for now, let's look at the last interceptor: CallServerInterceptor.

2.5 CallServerInterceptor

The CallServerInterceptor is responsible for reading response data from the server.

public final class CallServerInterceptor implements Interceptor {
    
    @Override public Response intercept(Chain chain) throws IOException {
        
        // These objects are created in the previous Interceptor
        RealInterceptorChain realChain = (RealInterceptorChain) chain;
        HttpCodec httpCodec = realChain.httpStream();
        StreamAllocation streamAllocation = realChain.streamAllocation();
        RealConnection connection = (RealConnection) realChain.connection();
        Request request = realChain.request();
    
        long sentRequestMillis = System.currentTimeMillis();
        //1. Write the request header
        httpCodec.writeRequestHeaders(request);
    
        Response.Builder responseBuilder = null;
        if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
          // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
          // Continue" response before transmitting the request body. If we don't get that, return what
          // we did get (such as a 4xx response) without ever transmitting the request body.
          if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
            httpCodec.flushRequest();
            responseBuilder = httpCodec.readResponseHeaders(true);
          }
    
          //2 Write the request body
          if (responseBuilder == null) {
            // Write the request body if the "Expect: 100-continue" expectation was met.
            Sink requestBodyOut = httpCodec.createRequestBody(request, request.body().contentLength());
            BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
            request.body().writeTo(bufferedRequestBody);
            bufferedRequestBody.close();
          } else if (!connection.isMultiplexed()) {
            // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection from
            // being reused. Otherwise we're still obligated to transmit the request body to leave the
            // connection in a consistent state.
            streamAllocation.noNewStreams();
          }
        }
    
        httpCodec.finishRequest();
    
        //3 Read the response header
        if (responseBuilder == null) {
          responseBuilder = httpCodec.readResponseHeaders(false);
        }
    
        Response response = responseBuilder
            .request(request)
            .handshake(streamAllocation.connection().handshake())
            .sentRequestAtMillis(sentRequestMillis)
            .receivedResponseAtMillis(System.currentTimeMillis())
            .build();
    
        //4 Read the response body
        int code = response.code();
        if (forWebSocket && code == 101) {
          // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
          response = response.newBuilder()
              .body(Util.EMPTY_RESPONSE)
              .build();
        } else {
          response = response.newBuilder()
              .body(httpCodec.openResponseBody(response))
              .build();
        }
    
        if ("close".equalsIgnoreCase(response.request().header("Connection"))
            || "close".equalsIgnoreCase(response.header("Connection"))) {
          streamAllocation.noNewStreams();
        }
    
        if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
          throw new ProtocolException(
              "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
        }
    
        return response;
      }
}

Now that we have connected to the server via the ConnectInterceptor, we can write the request data and read the return data. The whole process:

  1. Write request header
  2. Write request body
  3. Read response header
  4. Read response body

Next, we will examine OkHttp's connection and caching mechanisms in more detail.

3 Connection mechanism

The creation of connections is handled by the StreamAllocation object, which, as we said, is created as early as RetryAndFollowUpInterceptor. It mainly manages two key roles:

  • RealConnection: An object that actually establishes a connection, using a Socket to establish a connection.
  • ConnectionPool: A pool used to manage and reuse connections.

We said that the StreamAllocation object prepares a Socket object for the connection but does not actually connect; the real connection is established in ConnectInterceptor, as described below.

3.1 Creating a Connection

As we mentioned in the ConnectInterceptor analysis above, the connection is completed in ConnectInterceptor. Real connections are managed by the ConnectionPool, which holds at most 5 idle keep-alive connections, each kept alive for 5 minutes, and uses an asynchronous thread to clean up invalid connections.

This is mainly accomplished by the following two methods:

  1. HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
  2. RealConnection connection = streamAllocation.connection();

Let’s look at it in detail.

StreamAllocation.newStream() ultimately calls the findConnection() method to establish the connection.

public final class StreamAllocation {
    
      /**
       * Returns a connection to host a new stream. This prefers the existing connection if it exists,
       * then the pool, finally building a new connection.
       */
      private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
          boolean connectionRetryEnabled) throws IOException {
        Route selectedRoute;
        synchronized (connectionPool) {
          if (released) throw new IllegalStateException("released");
          if (codec != null) throw new IllegalStateException("codec != null");
          if (canceled) throw new IOException("Canceled");
    
          //1 Check whether the connection is complete
          RealConnection allocatedConnection = this.connection;
          if (allocatedConnection != null && !allocatedConnection.noNewStreams) {
            return allocatedConnection;
          }
    
          //2 Specifies whether to use available connections in the connection pool
          Internal.instance.get(connectionPool, address, this, null);
          if (connection != null) {
            return connection;
          }
    
          selectedRoute = route;
        }
    
        // Route selection (there may be multiple IP addresses)
        if (selectedRoute == null) {
          selectedRoute = routeSelector.next();
        }
    
        //3 If no connection is available, create one
        RealConnection result;
        synchronized (connectionPool) {
          if (canceled) throw new IOException("Canceled");
    
          // Now that we have an IP address, make another attempt at getting a connection from the pool.
          // This could match due to connection coalescing.
          Internal.instance.get(connectionPool, address, this, selectedRoute);
          if (connection != null) {
            route = selectedRoute;
            return connection;
          }
    
          // Create a connection and assign it to this allocation immediately. This makes it possible
          // for an asynchronous cancel() to interrupt the handshake we're about to do.
          route = selectedRoute;
          refusedStreamCount = 0;
          result = new RealConnection(connectionPool, selectedRoute);
          acquire(result);
        }
    
        //4 Start the TCP and TLS handshake
        result.connect(connectTimeout, readTimeout, writeTimeout, connectionRetryEnabled);
        routeDatabase().connected(result.route());
    
        //5 Place the newly created connection in the connection pool
        Socket socket = null;
        synchronized (connectionPool) {
          // Pool the connection.
          Internal.instance.put(connectionPool, result);
    
          // If another multiplexed connection to the same address was created concurrently, then
          // release this connection and acquire that one.
          if (result.isMultiplexed()) {
            socket = Internal.instance.deduplicate(connectionPool, address, this);
            result = connection;
          }
        }
        closeQuietly(socket);
    
        return result;
      }
}

The whole process is as follows:

  1. Check whether the currently allocated connection is intact and usable:
  • The socket has not been closed
  • The input stream has not been closed
  • The output stream has not been closed
  • The HTTP/2 connection has not been shut down
  2. Check whether a connection is available in the connection pool; if so, use it.
  3. If no connection is available, create a new one.
  4. Start the TCP connection and the TLS handshake.
  5. Put the newly created connection into the connection pool.

This method creates a RealConnection object and then calls its connect() method to establish the connection. Let's look at the RealConnection.connect() implementation.

public final class RealConnection extends Http2Connection.Listener implements Connection {
    
    public void connect(
         int connectTimeout, int readTimeout, int writeTimeout, boolean connectionRetryEnabled) {
       if (protocol != null) throw new IllegalStateException("already connected");
   
       // Route selection
       RouteException routeException = null;
       List<ConnectionSpec> connectionSpecs = route.address().connectionSpecs();
       ConnectionSpecSelector connectionSpecSelector = new ConnectionSpecSelector(connectionSpecs);
   
       if (route.address().sslSocketFactory() == null) {
         if (!connectionSpecs.contains(ConnectionSpec.CLEARTEXT)) {
           throw new RouteException(new UnknownServiceException(
               "CLEARTEXT communication not enabled for client"));
         }
         String host = route.address().url().host();
         if (!Platform.get().isCleartextTrafficPermitted(host)) {
           throw new RouteException(new UnknownServiceException(
               "CLEARTEXT communication to " + host + " not permitted by network security policy"));
         }
       }

       // Start the connection
       while (true) {
         try {
           // If a tunnel is required (e.g. HTTPS through an HTTP proxy), establish the tunnel connection
           if (route.requiresTunnel()) {
             connectTunnel(connectTimeout, readTimeout, writeTimeout);
           } 
           // Otherwise, the Socket connection is used, which is usually the case
           else {
             connectSocket(connectTimeout, readTimeout);
           }
           // Set up an HTTPS connection
           establishProtocol(connectionSpecSelector);
           break;
         } catch (IOException e) {
           closeQuietly(socket);
           closeQuietly(rawSocket);
           socket = null;
           rawSocket = null;
           source = null;
           sink = null;
           handshake = null;
           protocol = null;
           http2Connection = null;
   
           if (routeException == null) {
             routeException = new RouteException(e);
           } else {
             routeException.addConnectException(e);
           }
   
           if (!connectionRetryEnabled || !connectionSpecSelector.connectionFailed(e)) {
             throw routeException;
           }
         }
       }

       if (http2Connection != null) {
         synchronized (connectionPool) {
           allocationLimit = http2Connection.maxConcurrentStreams();
         }
       }
     }

      /** Does all the work necessary to build a full HTTP or HTTPS connection on a raw socket. */
      private void connectSocket(int connectTimeout, int readTimeout) throws IOException {
        Proxy proxy = route.proxy();
        Address address = route.address();
    
        // Handles sockets according to the type of proxy
        rawSocket = proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.HTTP
            ? address.socketFactory().createSocket()
            : new Socket(proxy);
    
        rawSocket.setSoTimeout(readTimeout);
        try {
          // Set up the Socket connection
          Platform.get().connectSocket(rawSocket, route.socketAddress(), connectTimeout);
        } catch (ConnectException e) {
          ConnectException ce = new ConnectException("Failed to connect to " + route.socketAddress());
          ce.initCause(e);
          throw ce;
        }
    
        // The following try/catch block is a pseudo hacky way to get around a crash on Android 7.0
        // More details:
        // https://github.com/square/okhttp/issues/3245
        // https://android-review.googlesource.com/#/c/271775/
        try {
          // Get the input/output stream
          source = Okio.buffer(Okio.source(rawSocket));
          sink = Okio.buffer(Okio.sink(rawSocket));
        } catch (NullPointerException npe) {
          if (NPE_THROW_WITH_NULL.equals(npe.getMessage())) {
            throw new IOException(npe);
          }
        }
      }
}

The end result is to call the connect() method on the Socket in Java.

3.2 Connection pool

We know that in a complex network environment, frequently establishing Socket connections (TCP three-way handshake) and tearing them down (TCP four-way handshake) wastes network resources and time, so HTTP keep-alive connections are very important for reducing latency and improving speed.

Reuse of connections requires connection management, which introduces the concept of connection pooling.

By default, OkHttp keeps at most 5 idle keep-alive connections, and each idle connection lives for at most 5 minutes. ConnectionPool implements the reuse and management of connections.
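
These defaults can be overridden by supplying your own pool. A minimal sketch (the numbers are arbitrary examples, not recommendations):

// A pool that keeps at most 10 idle connections, each for up to 2 minutes.
ConnectionPool pool = new ConnectionPool(10, 2, TimeUnit.MINUTES);

OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(pool)
        .build();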

ConnectionPool internally maintains a thread pool to clean up connections, as shown below:

public final class ConnectionPool {
    
        private static final Executor executor = new ThreadPoolExecutor(0 /* corePoolSize */,
          Integer.MAX_VALUE /* maximumPoolSize */, 60L /* keepAliveTime */, TimeUnit.SECONDS,
          new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp ConnectionPool", true));
      
        // Clean up the connection, called in the thread pool executor.
        private final Runnable cleanupRunnable = new Runnable() {
          @Override public void run() {
            while (true) {
              // Perform the cleanup and return the next cleanup time.
              long waitNanos = cleanup(System.nanoTime());
              if (waitNanos == -1) return;
              if (waitNanos > 0) {
                long waitMillis = waitNanos / 1000000L;
                waitNanos -= (waitMillis * 1000000L);
                synchronized (ConnectionPool.this) {
                  try {
                    // Release the lock within timeout time
                    ConnectionPool.this.wait(waitMillis, (int) waitNanos);
                  } catch (InterruptedException ignored) {
                  }
                }
              }
            }
          }
        };
}

ConnectionPool maintains an internal thread pool to clean up connections. The cleanup() method is a blocking operation: it performs a cleanup, returns the time until the next cleanup is due, and then calls wait() to release the lock. When that time has elapsed it cleans up again, returns the next interval, and so on.

Let’s look at a concrete implementation of the cleanup() method.

public final class ConnectionPool {
    
      long cleanup(long now) {
        int inUseConnectionCount = 0;
        int idleConnectionCount = 0;
        RealConnection longestIdleConnection = null;
        long longestIdleDurationNs = Long.MIN_VALUE;
    
        synchronized (this) {
          // Iterate through all connections, marking inactive connections.
          for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
            RealConnection connection = i.next();
    
            //1. Query the number of references to StreamAllocation within the connection.
            if (pruneAndGetAllocationCount(connection, now) > 0) {
              inUseConnectionCount++;
              continue;
            }
    
            idleConnectionCount++;
    
            //2. Mark the connection that has been idle the longest.
            long idleDurationNs = now - connection.idleAtNanos;
            if (idleDurationNs > longestIdleDurationNs) {
              longestIdleDurationNs = idleDurationNs;
              longestIdleConnection = connection;
            }
          }
    
          if (longestIdleDurationNs >= this.keepAliveDurationNs
              || idleConnectionCount > this.maxIdleConnections) {
            //3. If the number of idle connections exceeds five or the keep-alive time exceeds five minutes, evict the connection.
            connections.remove(longestIdleConnection);
          } else if (idleConnectionCount > 0) {
            //4. Return the time until this connection expires, for the next cleanup.
            return keepAliveDurationNs - longestIdleDurationNs;
          } else if (inUseConnectionCount > 0) {
            //5. All connections are in use. Clean up again after the keep-alive duration (5 minutes).
            return keepAliveDurationNs;
          } else {
            //6. No connections at all. Break out of the loop.
            cleanupRunning = false;
            return -1;
          }
        }
    
        //7. Close the evicted connection and return 0 to clean up again immediately.
        closeQuietly(longestIdleConnection.socket());
        return 0;
      }
}
Copy the code

The process of the whole method is as follows (a note on the pool's public API follows the list):

  1. Query the number of StreamAllocation references held by each connection; connections with live references are in use.
  2. Mark idle connections and track the one that has been idle the longest.
  3. If there are more than five idle connections, or the longest idle time exceeds the five-minute keep-alive duration, evict that connection.
  4. Otherwise, return the time remaining until the longest-idle connection expires, for the next cleanup.
  5. If all connections are in use, clean up again after the keep-alive duration (five minutes).
  6. If there are no connections at all, break out of the loop.
  7. Close the evicted connection's socket and return 0 so that cleanup runs again immediately.
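Applications never call cleanup() themselves, but ConnectionPool does expose a few public methods for inspecting and flushing the pool. A small usage sketch, assuming client is an existing OkHttpClient:

ConnectionPool pool = client.connectionPool();
System.out.println("total connections: " + pool.connectionCount());
System.out.println("idle connections:  " + pool.idleConnectionCount());
pool.evictAll(); // immediately closes and removes all idle connections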

RealConnection holds a list of weak references to StreamAllocation. Each time a StreamAllocation is created it is added to the list, and when the StreamAllocation is released it is removed. This reference counting is used to determine whether a connection is idle:

public final List<Reference<StreamAllocation>> allocations = new ArrayList<>();
Copy the code

The reference count is computed by the pruneAndGetAllocationCount() method, whose implementation is as follows:

public final class ConnectionPool {
    
     private int pruneAndGetAllocationCount(RealConnection connection, long now) {
       // Weak reference list
       List<Reference<StreamAllocation>> references = connection.allocations;
       // Iterate through the list of weak references
       for (int i = 0; i < references.size(); ) {
         Reference<StreamAllocation> reference = references.get(i);
         // If the referenced StreamAllocation is still in use, count it and move on.
         if (reference.get() != null) {
           // Reference count
           i++;
           continue;
         }
   
         // We've discovered a leaked allocation. This is an application bug.
         StreamAllocation.StreamAllocationReference streamAllocRef =
             (StreamAllocation.StreamAllocationReference) reference;
         String message = "A connection to " + connection.route().address().url()
             + " was leaked. Did you forget to close a response body?";
         Platform.get().logCloseableLeak(message, streamAllocRef.callStackTrace);
   
         // Otherwise remove the StreamAllocation reference
         references.remove(i);
         connection.noNewStreams = true;
   
         // If all StreamAllocation references are gone, return a reference count of 0
         if (references.isEmpty()) {
           connection.idleAtNanos = now - keepAliveDurationNs;
           return 0;
         }
       }
       // Return the size of the reference list as the reference count
       return references.size();
     }
}
Copy the code

Four Caching mechanism

4.1 Cache Policy

Before looking at Okhttp’s caching mechanism, let’s review HTTP’s caching theory, which is the basis for implementing Okhttp.

The HTTP caching mechanism relies on header fields in the request and the response to decide whether the final response is pulled from the cache or fetched from the server. The process of the HTTP caching mechanism is as follows:

👉 Click on the image for a larger view

HTTP caches can be divided into two types:

  • Force caching: the server does not need to be involved in deciding whether the cache can still be used. When the client first requests the data, the server returns an expiration time (Expires or Cache-Control); as long as the data has not expired, the client can keep using the cache without asking the server again.
  • Comparison caching: when the client requests data for the first time, the server returns a cache identifier (Last-Modified/If-Modified-Since or ETag/If-None-Match) along with the data, and the client stores both. On the next request, the client sends the saved identifier to the server, and the server decides: if it returns 304, the client may continue to use the cache.

Force caching takes precedence over comparison caching.

Force caching relies on the two headers mentioned above:

  • Expires: the value of Expires is an expiration time returned by the server. If the request time is earlier than this expiration time, the cached data is used directly. The expiration time is generated by the server, however, so clock differences between the client and the server can introduce errors.
  • Cache-Control: because Expires suffers from this clock-skew problem, HTTP/1.1 uses Cache-Control instead of Expires.

Cache-Control can take the following values (a small usage sketch follows the list):

  • Private: The client can cache.
  • Public: Both the client and proxy server can cache.
  • Max-age = XXX: The cached content will be invalid after XXX seconds
  • No-cache: Comparison cache is required to verify cached data.
  • No-store: Nothing will be cached, neither force caching nor comparison caching will be triggered.
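These directives can also be attached to an individual request through OkHttp's CacheControl class. A minimal sketch; the URL and max-age value are illustrative:

Request request = new Request.Builder()
    .url("https://example.com/api/data")
    .cacheControl(new CacheControl.Builder()
        .maxAge(60, TimeUnit.SECONDS) // accept a cached response up to 60 seconds old
        .build())
    .build();

// Shortcuts for the two extremes:
Request fromNetworkOnly = request.newBuilder().cacheControl(CacheControl.FORCE_NETWORK).build();
Request fromCacheOnly = request.newBuilder().cacheControl(CacheControl.FORCE_CACHE).build();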

Let’s look at the two identifiers used by comparison caching:

Last-Modified/If-Modified-Since

Last-Modified indicates the time when the resource was last modified.

When the client sends the first request, the server returns the last time the resource was modified:

Last-Modified: Tue, 12 Jan 2016 09:31:27 GMT
Copy the code

When the client sends the request again, it carries If-Modified-Since in the header, passing back the modification time previously returned by the server:

If-Modified-Since: Tue, 12 Jan 2016 09:31:27 GMT 
Copy the code

When the server receives the modification time from the client, it compares it with the current modification time of its own resource. If the server's resource has been modified more recently than the time sent by the client, the resource has changed and the server returns 200 so that the client re-requests the resource; otherwise it returns 304, indicating that the resource has not been modified and the cache can continue to be used.

The method above uses a timestamp to indicate whether a resource has been modified; another method uses an ETag: if the ETag changes, the resource has been modified. ETag has a higher priority than Last-Modified.

Etag/If-None-Match

ETag is a resource file identifier. When the client sends the first request, the server returns the current resource identifier:

ETag: "5694c7ef-24dc"
Copy the code

When the client sends the request again, it carries in the header the resource identifier that the server returned last time:

If-None-Match:"5694c7ef-24dc"
Copy the code

After receiving the resource ID from the client, the server compares it with its own resource id. If the resource ID is different, it indicates that the resource has been modified and returns 200. If the resource ID is the same, it indicates that the resource has not been modified and returns 304.
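Normally OkHttp's cache adds these conditional headers automatically, but issuing one by hand makes the 304 round trip easy to observe. A small sketch; the URL and ETag value are illustrative, and client is an existing OkHttpClient:

Request conditional = new Request.Builder()
    .url("https://example.com/logo.png")
    .header("If-None-Match", "\"5694c7ef-24dc\"")
    .build();
try (Response response = client.newCall(conditional).execute()) {
  System.out.println(response.code()); // 304 if unchanged, 200 with a fresh body otherwise
}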

So that’s the theory of the HTTP caching strategy, let’s take a look at the implementation.

The caching strategy of Okhttp is implemented according to the above flowchart. The specific implementation class is CacheStrategy. The CacheStrategy constructor takes two arguments:

CacheStrategy(Request networkRequest, Response cacheResponse) {
  this.networkRequest = networkRequest;
  this.cacheResponse = cacheResponse;
}
Copy the code

The meanings of the two parameters are as follows:

  • NetworkRequest: the network request; null means no network access is needed.
  • CacheResponse: the cached response, backed by a file cache built on DiskLruCache. The key is the MD5 of the request URL and the value is the response data read from the cache files, which we'll cover in the next section.

CacheStrategy uses these two parameters to generate the final policy. It works a bit like a map operation: networkRequest and cacheResponse are the inputs, they are processed, and one of the following combinations is the output (a sketch of how CacheInterceptor consumes each combination follows the list):

  • If networkRequest is null and cacheResponse is null: the request allowed only-if-cached, but the cache does not exist or has expired, so no network request is made and a 504 error is returned.
  • If networkRequest is null and cacheResponse is non-null: the cache is valid, so it is returned directly without any network request.
  • If networkRequest is non-null and cacheResponse is null: the cache does not exist or has expired, so the request goes directly to the network.
  • If networkRequest is non-null and cacheResponse is non-null: the header contains an ETag/Last-Modified tag, so a conditional request is sent and the server decides whether the cache can still be used.
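For context, this is roughly how CacheInterceptor consumes the strategy, heavily abridged from the 3.9.x source: strategy is the CacheStrategy computed for the current request, timing fields, header merging, and error handling are omitted, and stripBody() removes the body from the cached response before embedding it.

Request networkRequest = strategy.networkRequest;
Response cacheResponse = strategy.cacheResponse;

// 1. Both null: only-if-cached was requested but no usable cache exists, so build a synthetic 504.
if (networkRequest == null && cacheResponse == null) {
  return new Response.Builder()
      .request(chain.request())
      .protocol(Protocol.HTTP_1_1)
      .code(504)
      .message("Unsatisfiable Request (only-if-cached)")
      .body(Util.EMPTY_RESPONSE)
      .build();
}

// 2. Network forbidden but the cache is good: answer from the cache, no network at all.
if (networkRequest == null) {
  return cacheResponse.newBuilder().cacheResponse(stripBody(cacheResponse)).build();
}

// 3 & 4. Otherwise go to the network. If cacheResponse is also non-null and the server
// answers 304, the cached body is reused and its stored headers are refreshed.
Response networkResponse = chain.proceed(networkRequest);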

So how are these four cases determined? Let’s see.

A CacheStrategy is constructed using the factory pattern. After a CacheStrategy.Factory object is built, the concrete CacheStrategy is obtained by calling its get() method. Internally, CacheStrategy.Factory.get() calls CacheStrategy.Factory.getCandidate(), which is the core of the implementation.
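A sketch of how CacheInterceptor drives the factory; cacheCandidate is whatever the Cache returned for this request (possibly null), and the timestamp feeds the age calculation:

CacheStrategy strategy = new CacheStrategy.Factory(
    System.currentTimeMillis(), chain.request(), cacheCandidate).get();
Request networkRequest = strategy.networkRequest;   // null => do not touch the network
Response cacheResponse = strategy.cacheResponse;    // null => the cache cannot be used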

The getCandidate() implementation is as follows:

public static class Factory {
    
        private CacheStrategy getCandidate() {
          //1. If the cache is not hit, the network request is made directly.
          if (cacheResponse == null) {
            return new CacheStrategy(request, null);
          }
    
          //2. If the TLS handshake information is missing from the cached response, go to the network.
          if (request.isHttps() && cacheResponse.handshake() == null) {
            return new CacheStrategy(request, null);
          }

          //3. Based on the response status code, the expiration time and any no-store directive, decide whether the cached response is usable at all.
          if (!isCacheable(cacheResponse, request)) {
            return new CacheStrategy(request, null);
          }
    
          //4. If the request header contains "no-cache" or it is already a conditional GET request (with an ETag/Since tag in the header), go to the network.
          CacheControl requestCaching = request.cacheControl();
          if (requestCaching.noCache() || hasConditions(request)) {
            return new CacheStrategy(request, null);
          }
    
          CacheControl responseCaching = cacheResponse.cacheControl();
          if (responseCaching.immutable()) {
            return new CacheStrategy(null, cacheResponse);
          }
    
          // Compute the current age of the cached response: now - sent + age
          long ageMillis = cacheResponseAge();
          // Freshness lifetime, usually the server's max-age
          long freshMillis = computeFreshnessLifetime();
    
          if (requestCaching.maxAgeSeconds() != -1) {
            // Bounded by the request's max-age
            freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
          }
    
          long minFreshMillis = 0;
          if (requestCaching.minFreshSeconds() != -1) {
            // Usually 0
            minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
          }
    
          long maxStaleMillis = 0;
          if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
            maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
          }
    
          //5. If the cache is still within its freshness lifetime and can be used directly, return the cached response.
          if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
            Response.Builder builder = cacheResponse.newBuilder();
            if (ageMillis + minFreshMillis >= freshMillis) {
              builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
            }
            long oneDayMillis = 24 * 60 * 60 * 1000L;
            if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
              builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
            }
            return new CacheStrategy(null, builder.build());
          }
    
          //6. If the cache has expired but carries an ETag, Last-Modified or Date value, send a conditional request
          // with If-None-Match or If-Modified-Since and let the server decide.
          String conditionName;
          String conditionValue;
          if (etag != null) {
            conditionName = "If-None-Match";
            conditionValue = etag;
          } else if (lastModified != null) {
            conditionName = "If-Modified-Since";
            conditionValue = lastModifiedString;
          } else if (servedDate != null) {
            conditionName = "If-Modified-Since";
            conditionValue = servedDateString;
          } else {
            return new CacheStrategy(request, null); // No condition! Make a regular request.
          }
    
          Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
          Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);
    
          Request conditionalRequest = request.newBuilder()
              .headers(conditionalRequestHeaders.build())
              .build();
          return new CacheStrategy(conditionalRequest, cacheResponse);
        }
}
Copy the code

The logic of the whole function follows the HTTP cache decision flow described above. The specific process is as follows:

  1. If the cache is not hit, the network request is made directly.
  2. If the TLS handshake information is missing from the cached response, go to the network.
  3. Based on the response status code, the expiration time and any no-store directive, decide whether the cached response is usable at all.
  4. If the request header contains "no-cache" or it is already a conditional GET request (with an ETag/Since tag in the header), go to the network.
  5. If the cache is still within its freshness lifetime, return the cached response directly.
  6. If the cache has expired but carries an ETag, Last-Modified or Date value, send a conditional request with If-None-Match or If-Modified-Since and let the server decide.

That is the entire process. Note that Okhttp's caching is driven automatically by the server's response headers and follows the RFC, so the client does not need to manage it manually.
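None of this happens, however, unless a cache is actually installed on the client; OkHttp has no cache by default. A minimal setup sketch, where context is an Android Context and the directory name and 10 MB size are illustrative:

Cache cache = new Cache(new File(context.getCacheDir(), "okhttp_cache"), 10L * 1024 * 1024);
OkHttpClient client = new OkHttpClient.Builder()
    .cache(cache)
    .build();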

With the caching strategy in mind, let’s look at how caching is managed on disk.

4.2 Cache Management

Okhttp's caching is built on top of DiskLruCache. The Cache class encapsulates the cache implementation and is exposed to the rest of the framework through the InternalCache interface.

The InternalCache interface is as follows:


public interface InternalCache {
  // Get the cache
  Response get(Request request) throws IOException;
  // Save to cache
  CacheRequest put(Response response) throws IOException;
  // Remove the cache
  void remove(Request request) throws IOException;
  // Update the cache
  void update(Response cached, Response network);
  // Trace a GET request that satisfies the cache criteria
  void trackConditionalCacheHit();
  // Trace the responses that meet the CacheStrategy
  void trackResponse(CacheStrategy cacheStrategy);
}
Copy the code

Let’s look at its implementation class.

The Cache does not implement InternalCache directly. Instead, it holds an anonymous inner implementation of InternalCache whose methods simply delegate to the corresponding methods of the Cache, as shown below:

final InternalCache internalCache = new InternalCache() {
@Override public Response get(Request request) throws IOException {
  return Cache.this.get(request);
}

@Override public CacheRequest put(Response response) throws IOException {
  return Cache.this.put(response);
}

@Override public void remove(Request request) throws IOException {
  Cache.this.remove(request);
}

@Override public void update(Response cached, Response network) {
  Cache.this.update(cached, network);
}

@Override public void trackConditionalCacheHit() {
  Cache.this.trackConditionalCacheHit();
}

@Override public void trackResponse(CacheStrategy cacheStrategy) {
  Cache.this.trackResponse(cacheStrategy);
  }
};

// OkHttpClient hands this delegate to CacheInterceptor:
InternalCache internalCache() {
  return cache != null ? cache.internalCache : internalCache;
}
Copy the code

There are also inner classes defined within the Cache class that encapsulate request and response information:

  • Cache.Entry: Encapsulates request and response information, including url, varyHeaders, protocol, code, message, responseHeaders, handshake, sentRequestMillis, and receivedResponseMillis.
  • Cache.CacheResponseBody: inherits from the ResponseBody and encapsulates the Cache snapshot, ResponseBody bodySource, contentType, and contentLength.

In addition to these two classes, Okhttp also provides a FileSystem class, which uses the Okio library to wrap Java File operations and simplify I/O. With that in place, all that's left are the insert, get, and delete operations on DiskLruCache.
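As a point of reference, the DiskLruCache key for an entry is the hex-encoded MD5 of the request URL, as mentioned above. A standalone illustration with Okio's ByteString, the same utility OkHttp uses internally; the URL is illustrative:

String url = "https://example.com/api/data";
String key = ByteString.encodeUtf8(url).md5().hex();
// This key names the DiskLruCache entry holding the response's metadata and body on disk.
System.out.println(key);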

For more on that, see 07 Android open source framework source analysis: LruCache and DiskLruCache.

Ok, that’s all we have to say about Okhttp. It’s a very well-designed library, and there’s a lot to learn from it. In the next post, we’ll look at its great partner, Retrofit.