1. Basic use of OkHttp
OkHttp is Square’s HTTP and HTTP/2 client for Android and Java. To use it, just add the following dependency to Gradle:
implementation 'com.squareup.okhttp3:okhttp:3.11.0'
As we know, there are many types of HTTP requests. The most commonly used are GET and POST, and POST requests can be form-encoded or multipart. Let’s take a look at the OkHttp API design for a form request:
OkHttpClient internalHttpClient = new OkHttpClient();
// Build the form request body
FormBody.Builder formBodyBuilder = new FormBody.Builder();
formBodyBuilder.add("key", "value"); // add form fields as needed
RequestBody body = formBodyBuilder.build();
// Build the full request (the URL here is a placeholder)
Request.Builder builder = new Request.Builder().url("http://host:port/url").post(body);
Request request = builder.build();
// Execute the request synchronously and read the response body
Response response = internalHttpClient.newCall(request).execute();
String retJson = response.body().string();
Here we first use the FormBody builder to create the request body of the form request, and then use the Request builder to create the full form request. We then use the OkHttp client internalHttpClient we created to execute the request and read the JSON data from the response body.
According to the OkHttp API, if we want to send a multipart request, we need to use the MultipartBody builder to create the multipart request body. The full multipart request is then created with the same Request.Builder, and the rest of the logic is the same.
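For reference, a minimal sketch of such a multipart request, assuming OkHttp 3.x (field names, the file and the URL are placeholders):
RequestBody fileBody = RequestBody.create(MediaType.parse("image/png"), new File("avatar.png"));
MultipartBody multipartBody = new MultipartBody.Builder()
        .setType(MultipartBody.FORM)                       // multipart/form-data
        .addFormDataPart("username", "Tom")                // a plain form field
        .addFormDataPart("avatar", "avatar.png", fileBody) // a file part
        .build();
Request request = new Request.Builder()
        .url("http://host:port/upload")
        .post(multipartBody)
        .build();
Response response = new OkHttpClient().newCall(request).execute();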
In addition to directly instantiating an OkHttp client as above, we can also create one using OkHttpClient.Builder.
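A minimal sketch of building a client this way, assuming OkHttp 3.x (the timeout values are placeholders):
OkHttpClient client = new OkHttpClient.Builder()
        .connectTimeout(10, TimeUnit.SECONDS)  // connection timeout
        .readTimeout(10, TimeUnit.SECONDS)     // read timeout
        .writeTimeout(10, TimeUnit.SECONDS)    // write timeout
        .build();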
So, we can conclude:
- OkHttp provides a builder for each request-body type, which is used to create the request body (RequestBody);
- since the request body is only one part of the whole request, Request.Builder is used to build the complete request object (Request);
- with a complete HTTP request in hand, the OkHttpClient object is used to execute it and obtain the response object (Response).
OkHttp’s API design is friendly and its ideas are clear. Once you understand the design logic above, it is not difficult to read other people’s API designs in the same way, or to wrap your own library on top of OkHttp.
2. OkHttp source analysis
Some of the basic API classes we mentioned above are intended for use by the user. The design of these classes is based only on the builder pattern and is very easy to understand. Again, our focus is not on these API classes, but on the request execution classes within OkHttp. Let’s start with a source analysis of the OkHttp request process (source version 3.10.0).
2.1 The general flow of a request
Referring to the example program above, since we care about how the Request is sent rather than how it is built, our starting point should be OkHttpClient.newCall(Request). Here is the definition of this method; it creates a RealCall object, passing in the OkHttpClient and the Request as parameters:
@Override public Call newCall(Request request) {
return RealCall.newRealCall(this, request, false /* for web socket */);
}
The static method RealCall.newRealCall() then creates a RealCall instance and returns it:
static RealCall newRealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
RealCall call = new RealCall(client, originalRequest, forWebSocket);
call.eventListener = client.eventListenerFactory().create(call);
return call;
}
With the returned RealCall, we then call its execute() method to get the response. Here is the definition of that method:
@Override public Response execute() throws IOException {
  synchronized (this) {
    if (executed) throw new IllegalStateException("Already Executed");
    executed = true;
  }
  captureCallStackTrace();
  eventListener.callStart(this);
  try {
    // Join the dual-ended queue
    client.dispatcher().executed(this);
    // Take the Response from here
    Response result = getResponseWithInterceptorChain();
    if (result == null) throw new IOException("Canceled");
    return result;
  } catch (IOException e) {
    eventListener.callFailed(this, e);
    throw e;
  } finally {
    client.dispatcher().finished(this);
  }
}
Here we get a Dispatcher object using the dispatcher() method of the client object (which is the OkHttpClient passed in when the RealCall was created above). Its executed() method is then used to add the current RealCall to a dual-ended queue. Here is the definition of that method in Dispatcher, where runningSyncCalls is of type Deque<RealCall>:
synchronized void executed(RealCall call) {
runningSyncCalls.add(call);
}
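For reference, the three request queues maintained by the Dispatcher are declared roughly as follows in the 3.x source (reproduced from memory and simplified; the names match those used in this article):
private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();   // async calls waiting to run
private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>(); // async calls currently running
private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();   // sync calls currently running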
Going back to the execute() method above: after the RealCall has been added to the deque, we call the getResponseWithInterceptorChain() method. Here is its definition:
Response getResponseWithInterceptorChain() throws IOException {
  // Add a series of interceptors. Note the order in which they are added
  List<Interceptor> interceptors = new ArrayList<>();
  interceptors.addAll(client.interceptors());
  // Retry and redirect interceptor
  interceptors.add(retryAndFollowUpInterceptor);
  // Bridge interceptor
  interceptors.add(new BridgeInterceptor(client.cookieJar()));
  // Cache interceptor: retrieve data from the cache
  interceptors.add(new CacheInterceptor(client.internalCache()));
  // Network connection interceptor: establishes a network connection
  interceptors.add(new ConnectInterceptor(client));
  if (!forWebSocket) {
    interceptors.addAll(client.networkInterceptors());
  }
  // Server request interceptor: sends a request to the server for data
  interceptors.add(new CallServerInterceptor(forWebSocket));

  // Build the chain of responsibility
  Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
      originalRequest, this, eventListener, client.connectTimeoutMillis(),
      client.readTimeoutMillis(), client.writeTimeoutMillis());

  // Process the chain of responsibility
  return chain.proceed(originalRequest);
}
Here, we create a list and add to it, in order, the interceptors configured on the client, the retry-and-redirect interceptor, the bridge interceptor, the cache interceptor, the connect interceptor and, finally, the call-server interceptor. We then use this list to create the interceptor chain. The chain of responsibility design pattern is used here: after each interceptor finishes its work, it either invokes the next interceptor or returns a result directly. Obviously, the response we end up with is the result of the whole chain’s execution. When we define a custom interceptor, it is simply added to this chain, for example:
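A minimal sketch of a custom application interceptor, assuming OkHttp 3.x; it only logs the URL and the time the call took, then delegates to the rest of the chain with chain.proceed():
OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(new Interceptor() {
            @Override public Response intercept(Chain chain) throws IOException {
                Request request = chain.request();
                long start = System.nanoTime();
                Response response = chain.proceed(request);   // hand off to the next interceptor
                long tookMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("Requested " + request.url() + " in " + tookMs + " ms");
                return response;
            }
        })
        .build();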
Here we come across a number of new classes, such as RealCall, Dispatcher, and chain of responsibility. In the following sections, we will analyze the relationships between these classes and the links in the chain of responsibility, but here we will first make a general overview of the overall request flow. Here’s a rough sequence of the process:
2.2 Dispatcher
We mentioned the Dispatcher class above, which is used to dispatch requests. In the earlier example code, when using OkHttp we create a RealCall and add it to the dual-ended queue. Note, however, that the name of that queue is runningSyncCalls, which means the request is synchronous and is executed immediately in the current thread. So the getResponseWithInterceptorChain() call that follows is also executed synchronously. When it finishes, the Dispatcher’s finished(RealCall) method is called to remove the request from the queue. Therefore, a synchronous request does not really exercise the “dispatching” capability of the dispatcher.
In addition to synchronous requests, there are asynchronous requests: when we get a RealCall, we call its enqueue(Callback responseCallback) method and set a Callback. The method executes the following line of code:
client.dispatcher().enqueue(new AsyncCall(responseCallback));
The callback above is used to create an AsyncCall, and enqueue(AsyncCall) is called. AsyncCall indirectly implements Runnable, so it is an executable object, and AsyncCall’s execute() method is called inside Runnable’s run() method. AsyncCall’s execute() method is similar to RealCall’s execute() in that it uses the interceptor chain to complete the network request; the difference is that it can be executed on an asynchronous thread.
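For reference, a minimal sketch of this asynchronous usage, assuming OkHttp 3.x; the callback methods are invoked on the thread-pool thread that executed the AsyncCall:
client.newCall(request).enqueue(new Callback() {
    @Override public void onFailure(Call call, IOException e) {
        e.printStackTrace();                          // the request failed
    }

    @Override public void onResponse(Call call, Response response) throws IOException {
        System.out.println(response.body().string()); // the request succeeded
    }
});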
An AsyncCall is also added to a queue when the Dispatcher’s enqueue(AsyncCall) method is called, and removed from the queue when the request completes. The queue here, however, is either runningAsyncCalls or readyAsyncCalls. Both are dual-ended queues used to store asynchronous requests. The difference is that runningAsyncCalls holds the calls currently running; when it reaches its limit, new calls are placed into readyAsyncCalls, the ready queue (the enqueue logic is shown below, followed by a sketch of how ready calls are promoted):
synchronized void enqueue(AsyncCall call) {
  if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
    runningAsyncCalls.add(call);
    executorService().execute(call);
  } else {
    readyAsyncCalls.add(call);
  }
}
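Correspondingly, when a call finishes, the dispatcher’s finished() method tries to promote waiting calls from readyAsyncCalls into runningAsyncCalls. A simplified sketch of that promotion logic (paraphrased from the 3.x source, details may differ):
private void promoteCalls() {
    if (runningAsyncCalls.size() >= maxRequests) return; // already running at capacity
    if (readyAsyncCalls.isEmpty()) return;               // nothing waiting to run

    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
        AsyncCall call = i.next();
        // Only promote a call if its host has not reached the per-host limit
        if (runningCallsForHost(call) < maxRequestsPerHost) {
            i.remove();
            runningAsyncCalls.add(call);
            executorService().execute(call);
        }
        if (runningAsyncCalls.size() >= maxRequests) return; // reached capacity again
    }
}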
Once the request is added to the queue that is executing, we immediately use a thread pool to execute the AsyncCall. The chain of responsibility for the request is then executed asynchronously in a thread pool. The thread pool here is returned by the executorService() method:
public synchronized ExecutorService executorService() {
  if (executorService == null) {
    executorService = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp Dispatcher", false));
  }
  return executorService;
}
Obviously, a thread pool is created lazily when one does not yet exist. Comparing the sequence diagram of an asynchronous request with that of the synchronous request above shows the difference between the two implementations. In addition, we can also supply a custom Dispatcher when building the OkHttpClient and pass our own thread pool to its constructor, for example:
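A minimal sketch, assuming OkHttp 3.x (pool size and limits below are placeholder values):
Dispatcher dispatcher = new Dispatcher(Executors.newFixedThreadPool(4)); // custom thread pool
dispatcher.setMaxRequests(32);        // overall concurrent-request limit
dispatcher.setMaxRequestsPerHost(8);  // per-host concurrent-request limit
OkHttpClient client = new OkHttpClient.Builder()
        .dispatcher(dispatcher)
        .build();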
That is the logic of the Dispatcher, and it is not that complicated. From the above analysis we can also see that the actual execution of the request is not completed here: the Dispatcher only decides which thread the request runs on and caches the request in a dual-ended queue, while the actual request is carried out by the chain of responsibility. Next, let’s examine how the chain of responsibility is executed in OkHttp.
2.3 Execution process of chain of responsibility
In a typical chain-of-responsibility design pattern, many objects are linked into a chain, each holding a reference to its successor. A request is passed along the chain until one of the objects decides to handle it. The client that issues the request does not know which object on the chain will ultimately handle it, which allows the system to reorganize the chain and reassign responsibilities dynamically without affecting the client. A real-life example of a chain of responsibility is an interview process: if the interviewer in one round thinks you are not qualified, they can reject you; otherwise, they pass you on to the interviewer of the next round, as the small sketch below illustrates.
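A minimal, self-contained sketch of this classic pattern (illustrative only, not OkHttp code):
public class InterviewChainDemo {

    // Each interviewer either rejects the candidate or forwards them to the next round.
    interface Interviewer {
        String interview(String candidate);
    }

    static Interviewer round(String name, boolean reject, Interviewer next) {
        return candidate -> {
            if (reject) {
                return candidate + " rejected by " + name;       // handled here, chain stops
            }
            return next != null ? next.interview(candidate)       // forwarded to the next round
                                : candidate + " passed all rounds";
        };
    }

    public static void main(String[] args) {
        Interviewer chain = round("round 1", false, round("round 2", false, null));
        System.out.println(chain.interview("Alice"));  // Alice passed all rounds
    }
}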
In OkHttp, the chain of responsibility is executed in a slightly different way. Here we will analyze how the chain is implemented in OkHttp; the logic of each interceptor in the chain will be explained afterwards.
Going back to the code in section 2.1, there are two things to note:
- When the chain of responsibility (RealInterceptorChain) is created, the fifth argument we pass in is 0. This parameter, called index, is assigned to a field of the same name inside the RealInterceptorChain instance.
- The chain of responsibility is started by calling its proceed(Request) method.
Here is the definition of the proceed(Request) method:
@Override public Response proceed(Request request) throws IOException {
return proceed(request, streamAllocation, httpCodec, connection);
}
Again, the internal overloaded proceed() method is called. Here we simplify the method:
public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec, RealConnection connection) throws IOException {
if (index >= interceptors.size()) throw new AssertionError();
// ...
// Call the next interceptor in the chain of responsibility
RealInterceptorChain next = new RealInterceptorChain(interceptors, streamAllocation, httpCodec,
connection, index + 1, request, call, eventListener, connectTimeout, readTimeout,
writeTimeout);
Interceptor interceptor = interceptors.get(index);
Response response = interceptor.intercept(next);
// ...
return response;
}
Notice that when the chain is processed, it first creates the next link of the chain, passing index + 1 as that link’s index. It then takes the interceptor at the current index from the interceptor list, calls its intercept() method, and passes the next link in as a parameter.
Thus, if an interceptor wants the next level to continue processing the request, it calls the proceed() method of the chain that was passed in; if it can produce a result after its own processing, it simply returns a Response instance without calling proceed(). Because the index is incremented by 1 each time, every call to proceed() accurately picks the next interceptor from the list.
Note also the retry interceptor mentioned above, which internally starts a while loop and calls the chain’s proceed() method in the loop body in order to retry the request. This works because the index held by the chain at that interceptor is fixed, so every time proceed() is called, execution restarts from that interceptor’s next level. Here is how this chain of responsibility works:
Now that we know how OkHttp’s chain of interceptors works, let’s look at what logic each interceptor does.
2.4 Retry and redirect: RetryAndFollowUpInterceptor
RetryAndFollowUpInterceptor is mainly used to retry the request when an attempt fails and to redirect when necessary. As we said above, the chain of responsibility starts by calling the intercept() method of the first interceptor. If no custom interceptors were added when the OkHttpClient was created, RetryAndFollowUpInterceptor is the first interceptor called in our chain.
@Override public Response intercept(Chain chain) throws IOException {
  // ...
  // Note that a StreamAllocation is initialized here and assigned to a member variable; we will come back to it later
  StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
      createAddress(request.url()), call, eventListener, callStackTrace);
  this.streamAllocation = streamAllocation;

  // Record the number of redirects
  int followUpCount = 0;
  Response priorResponse = null;
  while (true) {
    if (canceled) {
      streamAllocation.release();
      throw new IOException("Canceled");
    }

    Response response;
    boolean releaseConnection = true;
    try {
      // The rest of the chain is executed here; each pass of the loop is one (re)try
      response = realChain.proceed(request, streamAllocation, null, null);
      releaseConnection = false;
    } catch (RouteException e) {
      // Call the recover method: it returns true if the failure is recoverable, false otherwise
      if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
        throw e.getLastConnectException();
      }
      releaseConnection = false;
      continue;
    } catch (IOException e) {
      // Try to connect to the server again
      boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
      if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
      releaseConnection = false;
      continue;
    } finally {
      // If releaseConnection is still true, an exception occurred and resources need to be released
      if (releaseConnection) {
        streamAllocation.streamFailed(null);
        streamAllocation.release();
      }
    }

    // If there was a previous response, priorResponse, attach it to the new response with an empty body
    if (priorResponse != null) {
      response = response.newBuilder()
          .priorResponse(priorResponse.newBuilder().body(null).build())
          .build();
    }

    // Depending on the response, this may add authentication information, redirect, or handle a timed-out request.
    // Null is returned if the request cannot be processed or the error does not require further handling
    Request followUp = followUpRequest(response, streamAllocation.route());

    // Cannot redirect, return the previous response directly
    if (followUp == null) {
      if (!forWebSocket) {
        streamAllocation.release();
      }
      return response;
    }

    // Close the resource
    closeQuietly(response.body());

    // If the maximum number of redirects is reached, an exception is thrown
    if (++followUpCount > MAX_FOLLOW_UPS) {
      streamAllocation.release();
      throw new ProtocolException("Too many follow-up requests: " + followUpCount);
    }

    if (followUp.body() instanceof UnrepeatableRequestBody) {
      streamAllocation.release();
      throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
    }

    // Determine whether the new request can reuse the previous connection; if not, create a new one
    if (!sameConnection(response, followUp.url())) {
      streamAllocation.release();
      streamAllocation = new StreamAllocation(client.connectionPool(),
          createAddress(followUp.url()), call, eventListener, callStackTrace);
      this.streamAllocation = streamAllocation;
    } else if (streamAllocation.codec() != null) {
      throw new IllegalStateException("Closing the body of " + response
          + " didn't close its backing stream. Bad interceptor?");
    }

    request = followUp;
    priorResponse = response;
  }
}
The above code is used to do some processing based on the error information. It will determine whether the request can be redirected or whether a retry is necessary based on the information returned by the server. If it is worth retrying, the request will be retried in the next loop by creating or reusing the previous connection, otherwise the resulting request will be wrapped and returned to the user. Here, we mentioned the StreamAllocation object, which acts as a management class that maintains the relationship between server connections, concurrent streams, and requests. This class also initializes a Socket connection object that gets input/output stream objects. Also, notice that here we pass in a connectionPool object via client.connectionPool(). Here we just initialize these classes, but we don’t actually use them in the current method, instead passing them to the interceptor below to get the response to the request from the server. Later, we’ll explain what these classes are for and how they relate to each other.
2.5 BridgeInterceptor
The BridgeInterceptor is used to build a network request from a user’s request, then access the network using that request, and finally build the user response from the network response. The interceptor’s logic is relatively simple and is used to wrap the request and convert the server response into a user-friendly response:
public final class BridgeInterceptor implements Interceptor {

  @Override public Response intercept(Chain chain) throws IOException {
    Request userRequest = chain.request();
    // Get the network request builder from the user request
    Request.Builder requestBuilder = userRequest.newBuilder();
    // ...
    // Execute the network request
    Response networkResponse = chain.proceed(requestBuilder.build());
    // ...
    // Get the user response builder from the network response
    Response.Builder responseBuilder = networkResponse.newBuilder().request(userRequest);
    // ...
    // Return the user response
    return responseBuilder.build();
  }
}
2.6 Using a cache: CacheInterceptor
The cache interceptor determines whether there is a cache available based on the request information and the cached response information. If there is a cache available, it returns that cache to the user, otherwise it continues the chain of responsibility to get the response from the server. When the response is obtained, it is cached to disk. Here’s the logic of this part:
public final class CacheInterceptor implements Interceptor {

  @Override public Response intercept(Chain chain) throws IOException {
    Response cacheCandidate = cache != null ? cache.get(chain.request()) : null;

    long now = System.currentTimeMillis();

    // Based on the information in the request and the cached response, determine whether there is a cache available
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest;  // Null if the request does not use the network
    Response cacheResponse = strategy.cacheResponse;   // Null if the request does not use the cache

    if (cache != null) {
      cache.trackResponse(strategy);
    }

    if (cacheCandidate != null && cacheResponse == null) {
      closeQuietly(cacheCandidate.body());
    }

    // The request uses neither the network nor the cache, so it is intercepted here and not handed to the next level (the network request interceptor)
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(504)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(Util.EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
    }

    // The request uses the cache but not the network: the result is taken from the cache, and there is no need to hand it to the next level
    if (networkRequest == null) {
      return cacheResponse.newBuilder().cacheResponse(stripBody(cacheResponse)).build();
    }

    Response networkResponse = null;
    try {
      // In this case the chain is invoked, and execution is actually handed to the next level
      networkResponse = chain.proceed(networkRequest);
    } finally {
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }

    // This point is reached once the next level has returned the network response. If a cached response exists, it may be updated
    if (cacheResponse != null) {
      // The server returned 304, so the cached result is returned
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers(), networkResponse.headers()))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
        networkResponse.body().close();
        cache.trackConditionalCacheHit();
        // Update the cache
        cache.update(cacheResponse, response);
        return response;
      } else {
        closeQuietly(cacheResponse.body());
      }
    }

    Response response = networkResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();

    // Put the result of the request into the cache
    if (cache != null) {
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        CacheRequest cacheRequest = cache.put(response);
        return cacheWritingResponse(cacheRequest, response);
      }

      if (HttpMethod.invalidatesCache(networkRequest.method())) {
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written
        }
      }
    }

    return response;
  }
}
For caching, OkHttp uses a global cache represented by an InternalCache variable. InternalCache is an interface with only one implementation class in OkHttp: Cache. Inside Cache, DiskLruCache is used to save the cached data to disk. DiskLruCache and LruCache are two caching strategies commonly used on Android: the former caches on disk, the latter in memory, and both are based on the LRU (Least Recently Used) eviction algorithm. We’ll cover these two caching frameworks in more detail in future articles, so stay tuned.
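Note that this cache is not enabled by default. A minimal sketch of plugging a Cache into the client, assuming OkHttp 3.x (the directory and size below are placeholders):
Cache cache = new Cache(new File("/path/to/cache-dir"), 10 * 1024 * 1024); // 10 MB disk cache
OkHttpClient client = new OkHttpClient.Builder()
        .cache(cache)
        .build();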
In addition, the two CacheStrategy fields above are computed from the information in the request and the cached response to decide whether a usable cache exists. Obtaining these two fields involves a lot of checks related to HTTP caching; if you are interested, refer to the source code.
2.7 Connection reuse: ConnectInterceptor
The ConnectInterceptor is used to open a network connection to the specified server and then hand execution over to the next interceptor. Only the connection is opened here; no request is sent to the server yet, as the logic of fetching data is left to the next interceptor. Although it merely opens a connection rather than actually fetching data from the network, there is still a lot worth looking at, because a ConnectionPool is used to reuse connections when the connection object is obtained.
public final class ConnectInterceptor implements Interceptor {

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();

    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();

    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}
Here HttpCodec is used to encode the request and decode the response, and RealConnection is used to initiate a connection to the server. They are used in the next interceptor to get response information from the server. The logic of the next interceptor is not that complicated, it just needs to read data from the server once everything is ready. This is probably where the heart of OkHttp lies, so let’s take a good look at how connection reuse is implemented with connection pooling when the connection is created.
As the code above shows, when we call StreamAllocation’s newStream() method, a series of checks eventually leads us to StreamAllocation’s findConnection() method:
private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
    int pingIntervalMillis, boolean connectionRetryEnabled) throws IOException {
  // ...
  synchronized (connectionPool) {
    // ...
    // Try to use an already-allocated connection; it may have been restricted from creating new streams
    releasedConnection = this.connection;
    // Release resources of the current connection; a Socket to close is returned if the connection can no longer create new streams
    toClose = releaseIfNoNewStreams();
    if (this.connection != null) {
      // A connection has already been allocated and is usable
      result = this.connection;
      releasedConnection = null;
    }
    if (!reportedAcquired) {
      // If the connection was never reported as acquired, do not report it as released; reportedAcquired is set via the acquire() method
      releasedConnection = null;
    }

    if (result == null) {
      // Try to get a connection from the connection pool
      Internal.instance.get(connectionPool, address, this, null);
      if (connection != null) {
        foundPooledConnection = true;
        result = connection;
      } else {
        selectedRoute = route;
      }
    }
  }
  // Close the socket that needs to be released
  closeQuietly(toClose);

  if (releasedConnection != null) {
    eventListener.connectionReleased(call, releasedConnection);
  }
  if (foundPooledConnection) {
    eventListener.connectionAcquired(call, result);
  }
  if (result != null) {
    // An existing or pooled connection was found, return it
    return result;
  }

  boolean newRouteSelection = false;
  if (selectedRoute == null && (routeSelection == null || !routeSelection.hasNext())) {
    newRouteSelection = true;
    routeSelection = routeSelector.next();
  }

  synchronized (connectionPool) {
    if (canceled) throw new IOException("Canceled");

    if (newRouteSelection) {
      // Now that we have a list of IP addresses (routes), try the connection pool again
      List<Route> routes = routeSelection.getAll();
      for (int i = 0, size = routes.size(); i < size; i++) {
        Route route = routes.get(i);
        // Get a connection from the connection pool
        Internal.instance.get(connectionPool, address, this, route);
        if (connection != null) {
          foundPooledConnection = true;
          result = connection;
          this.route = route;
          break;
        }
      }
    }

    if (!foundPooledConnection) {
      if (selectedRoute == null) {
        selectedRoute = routeSelection.next();
      }
      // Create a new connection and acquire it immediately so the call can be canceled before the handshake
      route = selectedRoute;
      refusedStreamCount = 0;
      result = new RealConnection(connectionPool, selectedRoute);
      acquire(result, false);
    }
  }

  // If we found a pooled connection on the second attempt, return it
  if (foundPooledConnection) {
    eventListener.connectionAcquired(call, result);
    return result;
  }

  // Perform the TCP and TLS handshakes
  result.connect(connectTimeout, readTimeout, writeTimeout, pingIntervalMillis,
      connectionRetryEnabled, call, eventListener);
  routeDatabase().connected(result.route());

  Socket socket = null;
  synchronized (connectionPool) {
    reportedAcquired = true;
    // Put the connection into the connection pool
    Internal.instance.put(connectionPool, result);
    // If another multiplexed connection to the same address was created concurrently, release this connection and use that one
    if (result.isMultiplexed()) {
      socket = Internal.instance.deduplicate(connectionPool, address, this);
      result = connection;
    }
  }
  closeQuietly(socket);

  eventListener.connectionAcquired(call, result);
  return result;
}
This method is placed in a loop that is called over and over again to get an available connection. It uses the current existing connection first, otherwise it uses the existing connection in the connection pool, and failing that, it creates a new connection. So, the above code is roughly divided into three parts:
- Determine whether the current connection is usable: whether the stream has been closed and has been restricted from creating new streams;
- If the current connection is not available, a connection is obtained from the connection pool;
- No available connections are found in the connection pool. Create a new connection, shake hands, and place it in the connection pool.
When obtaining a connection from the connection pool, the get() method of Internal is used. Internal has a single static instance that is initialized in a static block of OkHttpClient, and its get() implementation simply calls the connection pool’s get() method to obtain a connection.
As you can see from the above code, the benefit of connection reuse is that the TCP and TLS handshakes do not have to be performed again. Since establishing a connection takes time, reusing connections improves the efficiency of our network access. So how are these connections managed once they are placed in the connection pool? We’ll look at that in OkHttp’s ConnectionPool below.
2.8 CallServerInterceptor
The CallServerInterceptor is used to send the request to the server and obtain the data. It is the last interceptor in the chain, so instead of calling proceed() on the chain again, it processes the response and returns it directly to the previous interceptor:
public final class CallServerInterceptor implements Interceptor {

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    // Get the HttpCodec initialized in the ConnectInterceptor
    HttpCodec httpCodec = realChain.httpStream();
    // Get the StreamAllocation initialized in the RetryAndFollowUpInterceptor
    StreamAllocation streamAllocation = realChain.streamAllocation();
    // Get the RealConnection initialized in the ConnectInterceptor
    RealConnection connection = (RealConnection) realChain.connection();
    Request request = realChain.request();

    long sentRequestMillis = System.currentTimeMillis();

    realChain.eventListener().requestHeadersStart(realChain.call());
    // Write the request headers here
    httpCodec.writeRequestHeaders(request);
    realChain.eventListener().requestHeadersEnd(realChain.call(), request);

    Response.Builder responseBuilder = null;
    if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
      if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
        httpCodec.flushRequest();
        realChain.eventListener().responseHeadersStart(realChain.call());
        responseBuilder = httpCodec.readResponseHeaders(true);
      }

      // Write the request body here
      if (responseBuilder == null) {
        realChain.eventListener().requestBodyStart(realChain.call());
        long contentLength = request.body().contentLength();
        CountingSink requestBodyOut =
            new CountingSink(httpCodec.createRequestBody(request, contentLength));
        BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);
        // Write the request body
        request.body().writeTo(bufferedRequestBody);
        bufferedRequestBody.close();
        realChain.eventListener()
            .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
      } else if (!connection.isMultiplexed()) {
        streamAllocation.noNewStreams();
      }
    }

    httpCodec.finishRequest();

    if (responseBuilder == null) {
      realChain.eventListener().responseHeadersStart(realChain.call());
      // Read the response headers
      responseBuilder = httpCodec.readResponseHeaders(false);
    }

    Response response = responseBuilder
        .request(request)
        .handshake(streamAllocation.connection().handshake())
        .sentRequestAtMillis(sentRequestMillis)
        .receivedResponseAtMillis(System.currentTimeMillis())
        .build();

    // Read the response body
    int code = response.code();
    if (code == 100) {
      responseBuilder = httpCodec.readResponseHeaders(false);
      response = responseBuilder
          .request(request)
          .handshake(streamAllocation.connection().handshake())
          .sentRequestAtMillis(sentRequestMillis)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
      code = response.code();
    }

    realChain.eventListener().responseHeadersEnd(realChain.call(), response);

    if (forWebSocket && code == 101) {
      response = response.newBuilder()
          .body(Util.EMPTY_RESPONSE)
          .build();
    } else {
      response = response.newBuilder()
          .body(httpCodec.openResponseBody(response))
          .build();
    }
    // ...
    return response;
  }
}
2.9 Connection Management: ConnectionPool
Similar to the way requests are cached, OkHttp’s connection pool also uses a dual-ended queue to cache the connections that have been created:
private final Deque<RealConnection> connections = new ArrayDeque<>();
OkHttp’s connection-cache management is a two-part process: on one side, new connections are put into the pool; on the other side, the pool is cleaned up. When a connection is cached into the ConnectionPool, we simply call the add() method of the dual-ended queue to store it, while the cleanup of stale connections is handed off to a thread pool and performed on a schedule.
There is a static thread pool in the ConnectionPool:
private static final Executor executor = new ThreadPoolExecutor(0 /* corePoolSize */,
    Integer.MAX_VALUE /* maximumPoolSize */, 60L /* keepAliveTime */, TimeUnit.SECONDS,
    new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp ConnectionPool", true));
The following method is called whenever we insert a connection into the connection pool. Before the connection is added to the dual-ended queue, the thread pool above is used to run the cache-cleanup task if it is not already running:
void put(RealConnection connection) {
  assert (Thread.holdsLock(this));
  if (!cleanupRunning) {
    cleanupRunning = true;
    // Use the thread pool to perform the cleanup task
    executor.execute(cleanupRunnable);
  }
  // Insert the new connection into the dual-ended queue
  connections.add(connection);
}
The cleanup task here is cleanupRunnable, an instance of Runnable. Inside its run() method it calls cleanup() to clean up invalid connections:
private final Runnable cleanupRunnable = new Runnable() {
@Override public void run() {
while (true) {
long waitNanos = cleanup(System.nanoTime());
if (waitNanos == -1) return;
if (waitNanos > 0) {
long waitMillis = waitNanos / 1000000L;
waitNanos -= (waitMillis * 1000000L);
synchronized (ConnectionPool.this) {
try {
ConnectionPool.this.wait(waitMillis, (int) waitNanos);
} catch (InterruptedException ignored) {
}
}
}
}
}
};
Here is the cleanup() method:
long cleanup(long now) {
  int inUseConnectionCount = 0;
  int idleConnectionCount = 0;
  RealConnection longestIdleConnection = null;
  long longestIdleDurationNs = Long.MIN_VALUE;

  synchronized (this) {
    // Iterate over all connections
    for (Iterator<RealConnection> i = connections.iterator(); i.hasNext(); ) {
      RealConnection connection = i.next();

      // The current connection is in use
      if (pruneAndGetAllocationCount(connection, now) > 0) {
        inUseConnectionCount++;
        continue;
      }

      idleConnectionCount++;

      // Among the idle connections, find the one that has been idle the longest as the candidate to release
      long idleDurationNs = now - connection.idleAtNanos;
      if (idleDurationNs > longestIdleDurationNs) {
        longestIdleDurationNs = idleDurationNs;
        longestIdleConnection = connection;
      }
    }

    if (longestIdleDurationNs >= this.keepAliveDurationNs
        || idleConnectionCount > this.maxIdleConnections) {
      // If the idle time exceeds the maximum keep-alive duration or the number of idle connections exceeds the allowed maximum, remove the connection
      connections.remove(longestIdleConnection);
    } else if (idleConnectionCount > 0) {
      // There are idle connections, but none is due yet: wait for the remaining time (it will be cleaned up later, not now)
      return keepAliveDurationNs - longestIdleDurationNs;
    } else if (inUseConnectionCount > 0) {
      // All connections are in use; check again after the keep-alive duration (5 minutes by default)
      return keepAliveDurationNs;
    } else {
      // No connections at all: stop the cleanup task
      cleanupRunning = false;
      return -1;
    }
  }

  closeQuietly(longestIdleConnection.socket());
  return 0;
}
Two fields, maxIdleConnections and keepAliveDurationNs, are used during cleanup to decide whether a connection should be released; they indicate the maximum number of idle connections allowed and the maximum time a connection may stay idle. By default, the maximum number of idle connections is 5 and the maximum keep-alive time is 5 minutes.
The above method iterates over the connections in the cache to find the one that has been idle for the longest time, and then determines whether the connection should be cleared based on parameters such as its idle duration and the maximum number of connections allowed. Also note that the above method returns a time, and if the connection that has been idle for the longest time still takes a while to clear, the time difference will be returned, and then the connection pool will be cleaned again after that time.
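If these defaults do not fit, a custom pool can be supplied when the client is built. A minimal sketch, assuming OkHttp 3.x (the limits below are placeholder values):
ConnectionPool pool = new ConnectionPool(10, 10, TimeUnit.MINUTES); // 10 idle connections, 10-minute keep-alive
OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(pool)
        .build();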
Conclusion:
This concludes our analysis of the source code of OkHttp’s network access. When we make a request, we create a Call instance and then call its execute() or enqueue() method for a synchronous or an asynchronous request respectively. Although one of the two methods executes immediately in the current thread while the other runs in a thread pool, their network-access logic is the same: the request passes through the interceptor chain, going in turn through retry, bridging, caching, connection and server access to produce a response for the user. Caching and connection management are the two most important parts of this process: the former involves some knowledge of computer networking, while the latter is at the heart of OkHttp’s efficiency and of the framework itself.