Preface

In interviews, OkHttp, as a third-party library we almost always use, is also a very common topic, so mastering how it works will improve our skills to a certain extent.

square.github.io/okhttp/

Basic usage

This first snippet shows the basic use of OkHttp, pulled straight from the official website. In practice, the backend often compares the session or cookie to decide whether the current user matches the cached one, so a project usually creates its OkHttpClient as a singleton.

OkHttpClient client = new OkHttpClient();

String run(String url) throws IOException {
  Request request = new Request.Builder()
      .url(url)
      .build();

  try (Response response = client.newCall(request).execute()) {
    return response.body().string();
  }
}
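
Since the text above mentions creating the OkHttpClient as a singleton, here is a minimal sketch of that idea, assuming the Kotlin API of OkHttp 4.x; the HttpClientHolder name is hypothetical:

import okhttp3.OkHttpClient
import okhttp3.Request

// Hypothetical holder: every caller shares one client, so the connection pool,
// dispatcher thread pool and cookie jar are reused across requests.
object HttpClientHolder {
    val client: OkHttpClient by lazy { OkHttpClient() }
}

fun fetch(url: String): String {
    val request = Request.Builder().url(url).build()
    HttpClientHolder.client.newCall(request).execute().use { response ->
        return response.body!!.string()
    }
}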

Source code parsing

OkHttpClient client = new OkHttpClient.Builder().build();
Request request = new Request.Builder()
                .url(url)
                .build();

Call call = client.newCall(request);
call.enqueue(new Callback() {
        @Override
        public void onFailure(Call call, IOException e) {}

        @Override
        public void onResponse(Call call, Response response) throws IOException {}
});

This is how we typically use OkHttp, and the rest of the article revolves around the following five classes.

  • OkHttpClient: the global manager
  • Request: the request body
  • Call: initiates a request
  • Callback: the channel through which data is received
  • Response: the response data body

OkHttpClient, Request

First up are OkHttpClient and Request. Why cover these two together? Because both are constructed the same way: OkHttpClient is the global manager, and Request is a wrapper around the request body.

public final class Request {
  final HttpUrl url; // URL
  final String method; // request method
  final Headers headers; // request headers
  final @Nullable RequestBody body; // request body
  final Object tag;
}

public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
  final Dispatcher dispatcher; // dispatcher
  final @Nullable Proxy proxy; // proxy
  final List<Protocol> protocols; // protocols
  final List<ConnectionSpec> connectionSpecs; // transport-layer versions and connection specs
  final List<Interceptor> interceptors; // application interceptors
  final List<Interceptor> networkInterceptors; // network interceptors
  final EventListener.Factory eventListenerFactory;
  final ProxySelector proxySelector; // proxy selection
  final CookieJar cookieJar; // cookies
  final @Nullable Cache cache; // cache
  final @Nullable InternalCache internalCache; // internal cache
  final SocketFactory socketFactory; // socket factory
  final @Nullable SSLSocketFactory sslSocketFactory; // socket factory for HTTPS
  final @Nullable CertificateChainCleaner certificateChainCleaner; // verifies that response certificates apply to the host name of the HTTPS request
  final HostnameVerifier hostnameVerifier; // host name verification
  final CertificatePinner certificatePinner; // certificate pinning
  final Authenticator proxyAuthenticator; // proxy authentication
  final Authenticator authenticator; // local authentication
  final ConnectionPool connectionPool; // connection pool, reuses connections
  final Dns dns; // DNS
  final boolean followSslRedirects; // follow SSL redirects
  final boolean followRedirects; // follow redirects
  final boolean retryOnConnectionFailure; // retry on connection failure
  final int connectTimeout; // connect timeout
  final int readTimeout; // read timeout
  final int writeTimeout; // write timeout
  final int pingInterval;
}
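
To relate the Request fields above back to everyday use, here is a sketch that fills each of them through the Builder (the URL, header and body values are arbitrary, and the Kotlin extension helpers assume OkHttp 4.x):

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

// Each Builder call below fills one of the Request fields listed above:
// url, method + body, headers, and tag.
val json = """{"name":"okhttp"}""".toRequestBody("application/json".toMediaType())

val request = Request.Builder()
    .url("https://example.com/api/users")    // HttpUrl url
    .post(json)                              // String method + RequestBody body
    .header("Authorization", "Bearer token") // Headers headers
    .tag("user-request")                     // Object tag
    .build()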

You can see that OkHttpClient holds a lot of internal members, but most of the time we don't use them directly, because they are wrapped in several layers of encapsulation. The way both classes create their objects is the Builder design pattern.

internal constructor(okHttpClient: OkHttpClient) : this() {
      this.dispatcher = okHttpClient.dispatcher
      this.connectionPool = okHttpClient.connectionPool
      this.interceptors += okHttpClient.interceptors
      //...
    }
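
In day-to-day code we usually touch only a handful of those OkHttpClient fields, again through the Builder; a sketch with arbitrary values:

import java.util.concurrent.TimeUnit
import okhttp3.OkHttpClient

val client = OkHttpClient.Builder()
    .connectTimeout(10, TimeUnit.SECONDS)   // connectTimeout
    .readTimeout(30, TimeUnit.SECONDS)      // readTimeout
    .writeTimeout(30, TimeUnit.SECONDS)     // writeTimeout
    .retryOnConnectionFailure(true)         // retryOnConnectionFailure
    .followRedirects(true)                  // followRedirects
    .build()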

In more general terms, the Builder pattern collects data in a draft object and then applies it to build the real one.

val client = OkHttpClient.Builder().build()
// The Builder collects our raw configuration data;
// build() finally creates the OkHttpClient from it.
fun build(): OkHttpClient = OkHttpClient(this)
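
To make the pattern itself concrete, here is a minimal standalone Builder sketch (hypothetical classes, not OkHttp code): the Builder collects the raw data, and build() hands it to the finished object, just like OkHttpClient(this) above.

// Hypothetical example of the Builder pattern, mirroring OkHttpClient's structure.
class HttpConfig private constructor(builder: Builder) {
    val connectTimeout: Int = builder.connectTimeout
    val retryOnFailure: Boolean = builder.retryOnFailure

    class Builder {
        var connectTimeout: Int = 10       // defaults live in the Builder
        var retryOnFailure: Boolean = true

        fun connectTimeout(seconds: Int) = apply { connectTimeout = seconds }
        fun retryOnFailure(retry: Boolean) = apply { retryOnFailure = retry }

        // The Builder hands its collected data to the real object, like OkHttpClient(this)
        fun build(): HttpConfig = HttpConfig(this)
    }
}

val config = HttpConfig.Builder()
    .connectTimeout(15)
    .retryOnFailure(false)
    .build()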

But after all this, there is still a question: we haven't seen the data being used yet. Don't worry, that comes next.

Call: the task performer

Next up is the Call class. From the usage template we know that we hand the wrapped Request to the OkHttpClient and get back a Call.

@Override public Call newCall(Request request) {
    return RealCall.newRealCall(this, request, false /* for web socket */);
}

Stepping into newCall(), we see that what is actually returned is a concrete class called RealCall, which implements the Call interface. We don't need to understand its every operation yet; knowing which concrete class comes back is enough, because everything that follows revolves around it. Now look at the next line in the template, call.enqueue(...); stepping into it, we find the following function.

  override fun enqueue(responseCallback: Callback) {
    synchronized(this) {
      // A Call can be executed only once
      check(!executed) { "Already Executed" }
      executed = true
    }
    callStart()
    client.dispatcher.enqueue(AsyncCall(responseCallback)) // 1 ->
  }
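
A side effect of the check(!executed) guard above: a Call object can only ever run once, and re-running the same request requires clone(). A quick sketch (URL is arbitrary):

import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    val client = OkHttpClient()
    val call = client.newCall(Request.Builder().url("https://example.com").build())

    call.execute().close()            // the first execution is fine
    // call.execute()                 // would throw IllegalStateException: "Already Executed"
    call.clone().execute().close()    // clone() yields a fresh Call that can run again
}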

The rest is straightforward. Look at the last line of enqueue() above: since we publish a task and then need to receive its result, we naturally need a dispatcher and a channel for the feedback data. These are exactly the dispatcher we saw in OkHttpClient and the Callback we defined externally, which becomes responseCallback here.

internal fun enqueue(call: AsyncCall) {
    // Synchronize access to the request queues
    synchronized(this) {
      readyAsyncCalls.add(call)

      // Calls to the same host share a counter, which speeds things up and avoids wasting resources;
      // it looks for an existing call in the running queue first, then in the ready queue
      if (!call.call.forWebSocket) {
        val existingCall = findExistingCallWithHost(call.host)
        if (existingCall != null) call.reuseCallsPerHostFrom(existingCall)
      }
    }
    promoteAndExecute() // 1 ==>
  }
// 1 ==>
private fun promoteAndExecute(): Boolean {
    this.assertThreadDoesntHoldLock()

    val executableCalls = mutableListOf<AsyncCall>()
    val isRunning: Boolean
    synchronized(this) {
      val i = readyAsyncCalls.iterator()
      // Walk the ready queue and promote eligible calls
      while (i.hasNext()) {
        val asyncCall = i.next()

        // The number of running requests cannot exceed 64
        if (runningAsyncCalls.size >= this.maxRequests) break // Max capacity.
        // At most 5 concurrent requests per host
        if (asyncCall.callsPerHost.get() >= this.maxRequestsPerHost) continue // Host max capacity.

        i.remove()
        asyncCall.callsPerHost.incrementAndGet()
        // Move the call into the running queue
        executableCalls.add(asyncCall)
        runningAsyncCalls.add(asyncCall)
      }
      // Record whether any requests are currently running
      isRunning = runningCallsCount() > 0
    }
    // Actually execute every call that entered the running queue
    for (i in 0 until executableCalls.size) {
      val asyncCall = executableCalls[i]
      asyncCall.executeOn(executorService)
    }
    return isRunning
  }
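
Incidentally, those two limits (64 requests in total, 5 per host) are only defaults: maxRequests and maxRequestsPerHost can be changed on a Dispatcher that we hand to the Builder. A sketch:

import okhttp3.Dispatcher
import okhttp3.OkHttpClient

val dispatcher = Dispatcher().apply {
    maxRequests = 128       // default is 64
    maxRequestsPerHost = 10 // default is 5
}

val client = OkHttpClient.Builder()
    .dispatcher(dispatcher)
    .build()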

The final loop of promoteAndExecute() hands every call to executorService. So what exactly is this executorService?

Tracing back to the source, we find that it is already initialized inside the Dispatcher.

@get:JvmName("executorService") val executorService: ExecutorService
    get() {
      if (executorServiceOrNull == null) {
        executorServiceOrNull = ThreadPoolExecutor(0.Int.MAX_VALUE, 60, TimeUnit.SECONDS,
            SynchronousQueue(), threadFactory("$okHttpName Dispatcher".false))}return executorServiceOrNull!!
    }
Copy the code

Seeing ThreadPoolExecutor, we immediately think: a thread pool! But which kind of thread pool does it look like? Open the Executors class and compare it with the following code:

public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

Looks quite similar, doesn't it? The configuration is essentially that of a CachedThreadPool.
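
As a small sketch of what this configuration means in practice: with a SynchronousQueue, tasks never sit in a queue, so each submission either reuses an idle thread or spawns a new one, and idle threads are reclaimed after 60 seconds.

import java.util.concurrent.Executors

fun main() {
    // Behaves like OkHttp's dispatcher pool: 0 core threads, unbounded max, 60s keep-alive.
    val pool = Executors.newCachedThreadPool()

    repeat(3) { i ->
        pool.execute {
            println("task $i running on ${Thread.currentThread().name}")
            Thread.sleep(100)
        }
    }
    pool.shutdown()
}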

OK, fine! Using a thread pool for asynchronous work means there must be something runnable being executed in the pool. What is it? Any guesses? If not, it doesn't matter; let's keep digging.

fun executeOn(executorService: ExecutorService) {
      client.dispatcher.assertThreadDoesntHoldLock()

      var success = false
      try {
        executorService.execute(this) // (1)
        success = true
      } catch (e: RejectedExecutionException) {
        val ioException = InterruptedIOException("executor rejected")
        ioException.initCause(e)
        noMoreExchanges(ioException)
        responseCallback.onFailure(this@RealCall, ioException) // (2)
      } finally {
        if (!success) {
          client.dispatcher.finished(this) // (3)
        }
      }
    }

The rest is not a big problem; focus on comments (1), (2) and (3).

  1. executorService.execute(this): what the thread pool runs is a Runnable, and this is our AsyncCall; looking at AsyncCall we can see that it implements Runnable, so the source of the asynchronous work is now clear.
  2. responseCallback.onFailure(): errors are fed back to us through the Callback we passed in.
  3. client.dispatcher.finished(this): why is this needed? The original English comment says "This call is no longer running!"; in other words, this notifies the Dispatcher that our AsyncCall has finished running (see the sketch below).
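
Here is a simplified conceptual sketch of point 3, not OkHttp's actual code: finishing one call frees a slot, so the Dispatcher immediately tries to promote the next waiting call.

// Simplified conceptual sketch (not OkHttp's implementation):
// finished() removes the call from the running queue and immediately
// tries to promote waiting calls, which keeps the queues draining.
class SimpleDispatcher {
    private val readyCalls = ArrayDeque<Runnable>()
    private val runningCalls = mutableListOf<Runnable>()
    private val maxRequests = 64

    @Synchronized
    fun finished(call: Runnable) {
        runningCalls.remove(call)
        promote() // a slot just opened up, so pull in the next ready call
    }

    @Synchronized
    private fun promote() {
        while (runningCalls.size < maxRequests && readyCalls.isNotEmpty()) {
            val next = readyCalls.removeFirst()
            runningCalls.add(next)
            Thread(next).start() // OkHttp uses its thread pool here instead
        }
    }
}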

But there is still one question, isn't there? Where is responseCallback.onResponse() called?

Let's make a guess; as it turns out, the guess is nearly right: the onResponse() call appears in the run() function. On to the code.

override fun run() {
      threadName("OkHttp ${redactedUrl()}") {
        var signalledCallback = false
        timeout.enter()
        try {
          val response = getResponseWithInterceptorChain() // (1)
          signalledCallback = true
          responseCallback.onResponse(this@RealCall, response) // (2)
        } catch (e: IOException) {
          //...
          responseCallback.onFailure(this@RealCall, e)
        } catch (t: Throwable) {
          //...
          responseCallback.onFailure(this@RealCall, IOException("canceled due to $t"))
        } finally {
          client.dispatcher.finished(this)
        }
      }
    }

At comment (2) we finally see the onResponse() call. The next question, then, is: where does the Response come from?

The birth of the Response

The code already told us: from getResponseWithInterceptorChain(). So what happens inside that function? 🤔

It’s code again, so annoying…

internal fun getResponseWithInterceptorChain(): Response {
    // Build a full stack of interceptors.
    val interceptors = mutableListOf<Interceptor>()
    // The application interceptors we defined ourselves
    interceptors += client.interceptors
    interceptors += RetryAndFollowUpInterceptor(client)
    interceptors += BridgeInterceptor(client.cookieJar)
    interceptors += CacheInterceptor(client.cache)
    interceptors += ConnectInterceptor
    // The forWebSocket flag appeared earlier as well;
    // it is OkHttp's support for WebSocket (long-lived) connections
    if (!forWebSocket) {
      interceptors += client.networkInterceptors
    }
    interceptors += CallServerInterceptor(forWebSocket)

    val chain = RealInterceptorChain(
        call = this,
        interceptors = interceptors,
        index = 0,
        exchange = null,
        request = originalRequest,
        connectTimeoutMillis = client.connectTimeoutMillis,
        readTimeoutMillis = client.readTimeoutMillis,
        writeTimeoutMillis = client.writeTimeoutMillis
    )

    val response = chain.proceed(originalRequest)
    return response
  }

To keep things as concise as possible, I have kept only the key parts of the code here.

In effect, the Response is obtained by passing the request through a chain of interceptors. But this is clearly not the end of the line: what is returned comes from yet another call, so the answer lies inside it. A quick look shows that the concrete class doing this work is RealInterceptorChain.

override fun proceed(request: Request): Response {
    // Call the next interceptor to return the corresponding data
    val next = copy(index = index + 1, request = request)
    val interceptor = interceptors[index]

    val response = interceptor.intercept(next)

    return response
  }
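
This is the chain-of-responsibility pattern in action: each interceptor can either produce a Response itself or call chain.proceed() to pass the request along. The interceptors we add with addInterceptor() (the client.interceptors list above) plug into exactly this chain; here is a minimal custom interceptor sketch (the header name and log line are arbitrary):

import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response

// A sketch of a custom application interceptor: it decorates the request,
// delegates to the rest of the chain, and can inspect the response on the way back.
class HeaderInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request().newBuilder()
            .header("X-Client", "okhttp-demo") // hypothetical header
            .build()
        val response = chain.proceed(request)  // hand off to the next interceptor
        println("received ${response.code} for ${request.url}")
        return response
    }
}

val client = OkHttpClient.Builder()
    .addInterceptor(HeaderInterceptor())
    .build()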

Response

Interpreting the CacheInterceptor source code

We will focus on the CacheInterceptor class here, specifically its intercept() method, because it deals with response codes that come up frequently in interviews.

override fun intercept(chain: Interceptor.Chain): Response {
    // Look up a cached response for the request we passed in
    val cacheCandidate = cache?.get(chain.request())

    val now = System.currentTimeMillis()
    // Work out the strategy: should we use the network, the cache, or both?
    val strategy = CacheStrategy.Factory(now, chain.request(), cacheCandidate).compute()
    val networkRequest = strategy.networkRequest
    val cacheResponse = strategy.cacheResponse

    cache?.trackResponse(strategy)

    if (cacheCandidate != null && cacheResponse == null) {
      // The cache candidate wasn't applicable. Close it.
      cacheCandidate.body?.closeQuietly()
    }

    // Both the network request and the cached response are empty:
    // fail with HTTP_GATEWAY_TIMEOUT (504)
    if (networkRequest == null && cacheResponse == null) {
      return Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(HTTP_GATEWAY_TIMEOUT)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build()
    }

    // No network request needed: answer directly from the local cache
    if (networkRequest == null) {
      return cacheResponse!!.newBuilder()
          .cacheResponse(stripBody(cacheResponse))
          .build()
    }

    // Hand off to the next interceptor in the chain of responsibility to fetch the data
    var networkResponse: Response? = null
    try {
      networkResponse = chain.proceed(networkRequest)
    } finally {
      // If we're crashing on I/O or otherwise, don't leak the cache body.
      if (networkResponse == null && cacheCandidate != null) {
        cacheCandidate.body?.closeQuietly()
      }
    }

    // See whether there is a locally cached response to validate against
    if (cacheResponse != null) {
      // HTTP_NOT_MODIFIED (304) means our local cache is up to date,
      // so there is no need to pull the body from the server again
      if (networkResponse?.code == HTTP_NOT_MODIFIED) {
        val response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers, networkResponse.headers))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis)
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis)
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build()

        networkResponse.body!!.close()

        // Update the cache after combining headers but before stripping the
        // Content-Encoding header (as performed by initContentStream()).
        cache!!.trackConditionalCacheHit()
        cache.update(cacheResponse, response)
        return response
      } else {
        cacheResponse.body?.closeQuietly()
      }
    }

    val response = networkResponse!!.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build()

    // Write the fresh response into our local cache
    if (cache != null) {
      if (response.promisesBody() && CacheStrategy.isCacheable(response, networkRequest)) {
        // Offer this request to the cache.
        val cacheRequest = cache.put(response)
        return cacheWritingResponse(cacheRequest, response)
      }

      if (HttpMethod.invalidatesCache(networkRequest.method)) {
        try {
          cache.remove(networkRequest)
        } catch (_: IOException) {
          // The cache cannot be written.
        }
      }
    }

    return response
  }
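
One thing worth noting: cache is null unless we configure one, in which case the strategy above always ends in a network request. A sketch of enabling the disk cache (directory and size are arbitrary):

import java.io.File
import okhttp3.Cache
import okhttp3.OkHttpClient

// Without this, CacheInterceptor has no cache to consult and every call hits the network.
val cache = Cache(
    File("okhttp-cache"),  // arbitrary cache directory
    10L * 1024 * 1024      // max size: 10 MiB
)

val client = OkHttpClient.Builder()
    .cache(cache)
    .build()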

Conclusion

Finally, to sum up the entire OkHttp workflow: OkHttpClient and Request are assembled with the Builder pattern; newCall() wraps them into a RealCall; enqueue() hands an AsyncCall to the Dispatcher, which runs it on a cached-style thread pool; run() obtains the Response through the interceptor chain and delivers it to us through the Callback.