Preface
A few months ago I read through OkHttp's source code, but after some time all I can recall is a handful of interceptors, so I didn't gain much from it. That made me stop and think: what exactly should I be learning from OkHttp?
Questions
- How does OkHttp break its functionality down, and roughly what are its modules?
- Where does it use the chain-of-responsibility pattern, and in which real development scenarios is that pattern appropriate?
- How does it use thread pools, and what are the benefits of doing so?
How does OkHttp break its functionality down, and roughly what are its modules?
As a network framework, the core job is to send a request and process the response; those are the functional modules. OkHttp uses a Dispatcher to schedule the work, backed by a thread pool tuned for high concurrency.
The invocation flow of executing a request
- RealCall.enqueue(Callback)
- client.dispatcher().enqueue(new AsyncCall(responseCallback))
Let's pause here and follow the Dispatcher's enqueue method: how does it handle asynchronous requests internally?
```java
void enqueue(AsyncCall call) {
  synchronized (this) {
    readyAsyncCalls.add(call);
  }
  // The important call
  promoteAndExecute();
}

private boolean promoteAndExecute() {
  assert (!Thread.holdsLock(this));

  List<AsyncCall> executableCalls = new ArrayList<>();
  boolean isRunning;
  synchronized (this) {
    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
      AsyncCall asyncCall = i.next();

      if (runningAsyncCalls.size() >= maxRequests) break; // Max capacity.
      if (runningCallsForHost(asyncCall) >= maxRequestsPerHost) continue; // Host max capacity.

      i.remove();
      executableCalls.add(asyncCall);
      runningAsyncCalls.add(asyncCall);
    }
    isRunning = runningCallsCount() > 0;
  }

  for (int i = 0, size = executableCalls.size(); i < size; i++) {
    AsyncCall asyncCall = executableCalls.get(i);
    asyncCall.executeOn(executorService());
  }

  return isRunning;
}
```
First, AsyncCall is a class implementing Runnable, which is no surprise, since it gets executed on a thread pool.
The invocation flow of handling the response
So requests are queued, filtered, then traversed and submitted to the thread pool for execution, after which we wait for the response.
The exact code is in RealCall's AsyncCall.execute() method (invoked from run()), which uses the chain-of-responsibility interceptor pattern we've seen before.
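The promotion flow above can be sketched as a toy dispatcher. Everything here is my own simplification, not OkHttp's code: class and method names are invented, calls run synchronously instead of on a pool, and the removal of finished calls (OkHttp's finished()) is omitted.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// A stripped-down sketch of Dispatcher's promotion logic: calls wait in a
// ready deque and are promoted into a running deque only while capacity
// remains, then executed.
class MiniDispatcher {
  private final int maxRequests;
  private final Deque<Runnable> readyCalls = new ArrayDeque<>();
  private final Deque<Runnable> runningCalls = new ArrayDeque<>();

  MiniDispatcher(int maxRequests) { this.maxRequests = maxRequests; }

  synchronized void enqueue(Runnable call) {
    readyCalls.add(call);
    promoteAndExecute();
  }

  // Promote as many ready calls as capacity allows, then run them.
  // (OkHttp submits them to a thread pool and runs them outside the lock;
  // here they run inline, and finished calls are never removed.)
  private synchronized void promoteAndExecute() {
    List<Runnable> executable = new ArrayList<>();
    while (!readyCalls.isEmpty() && runningCalls.size() < maxRequests) {
      Runnable call = readyCalls.poll();
      runningCalls.add(call);
      executable.add(call);
    }
    for (Runnable call : executable) call.run();
  }

  synchronized int runningCount() { return runningCalls.size(); }
  synchronized int readyCount() { return readyCalls.size(); }
}
```

With maxRequests set to 2, a third enqueued call stays parked in the ready deque, mirroring how OkHttp defers calls past its limits.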
Why use ArrayDeque, a double-ended queue?
OkHttp source code series: ArrayDeque, a double-ended queue
First, the JDK provides two main double-ended queue implementations:
- LinkedList, which implements a deque on top of a linked list
- ArrayDeque, which implements a deque on top of a circular array
So why ArrayDeque? Efficiency. The JDK docs note that ArrayDeque is likely to be faster than LinkedList when used as a queue.
ArrayDeque is not thread-safe, though, so how does OkHttp guarantee synchronization? With the synchronized keyword.
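To make that concrete, here is a minimal sketch of guarding a non-thread-safe ArrayDeque with synchronized, the same discipline Dispatcher applies to its call deques. The SafeQueue wrapper is hypothetical, for illustration only; it is not an OkHttp class.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// ArrayDeque itself offers no thread safety, so every access must happen
// inside a lock. Synchronizing each method gives FIFO queue semantics that
// are safe to call from multiple threads.
class SafeQueue<T> {
  private final Deque<T> deque = new ArrayDeque<>();

  synchronized void add(T item) { deque.addLast(item); }

  synchronized T poll() { return deque.pollFirst(); }

  synchronized int size() { return deque.size(); }
}
```

Dispatcher does essentially this, but with synchronized (this) blocks around the deque operations instead of a wrapper class.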
How does OkHttp use the chain-of-responsibility pattern? Where else does the Android source code use it?
First, what is the chain-of-responsibility pattern, and how do we implement it in code?
Looking through a lot of write-ups online, I found a very vivid illustration. Xiao Zhang comes back from a business trip with 20k in expenses to be reimbursed by the company, so he goes to his team leader
- The team leader looks at the invoice; the amount exceeds his authority, so he tells Xiao Zhang to find the supervisor
- The supervisor takes one look: he can sign off at most 3k, so he sends him to the manager
- The manager can approve at most 10k, so Xiao Zhang is sent to the boss
- Finally the boss signs off and the matter is settled
The whole process involves multiple classes (team leader, supervisor, manager, and so on), each passing the request up one level until it is finally handled
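The story above translates almost directly into code. A minimal sketch, where the class name, titles, and approval limits are all illustrative (the original story only fixes the supervisor at 3k):

```java
// Chain of responsibility: each approver either handles the amount itself
// or forwards the request to its successor in the chain.
class Approver {
  private final String title;
  private final int limit; // maximum amount this role may sign off
  private Approver next;

  Approver(String title, int limit) {
    this.title = title;
    this.limit = limit;
  }

  // Returns the successor so chains can be built fluently.
  Approver setNext(Approver next) {
    this.next = next;
    return next;
  }

  // Returns the title of whoever finally approves the expense.
  String approve(int amount) {
    if (amount <= limit) return title;
    if (next == null) throw new IllegalStateException("nobody can approve " + amount);
    return next.approve(amount); // pass the request along the chain
  }
}
```

The caller only ever talks to the head of the chain; who actually handles the request is decided link by link, which is exactly the decoupling the pattern is after.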
View event dispatch in the Android source code also uses the chain-of-responsibility pattern: a MotionEvent is dispatched down through the layers of ViewGroups and is eventually either consumed or bubbled back up to the top-level View.
Android event dispatch and the chain-of-responsibility pattern
- So how does OkHttp implement the chain of responsibility?
Let's look at the damn code, woohoo!
```java
@Override protected void execute() {
  boolean signalledCallback = false;
  transmitter.timeoutEnter();
  try {
    // The chain of responsibility produces the Response; the key line:
    Response response = getResponseWithInterceptorChain();
    ...
  }
}

Response getResponseWithInterceptorChain() throws IOException {
  // Build the interceptor list
  List<Interceptor> interceptors = new ArrayList<>();
  interceptors.addAll(client.interceptors());
  interceptors.add(new RetryAndFollowUpInterceptor(client));
  interceptors.add(new BridgeInterceptor(client.cookieJar()));
  interceptors.add(new CacheInterceptor(client.internalCache()));
  interceptors.add(new ConnectInterceptor(client));
  if (!forWebSocket) {
    interceptors.addAll(client.networkInterceptors());
  }
  interceptors.add(new CallServerInterceptor(forWebSocket));

  // With the list assembled, wrap it in the first link of the chain
  Interceptor.Chain chain = new RealInterceptorChain(interceptors, transmitter, null, 0,
      originalRequest, this, client.connectTimeoutMillis(),
      client.readTimeoutMillis(), client.writeTimeoutMillis());

  boolean calledNoMoreExchanges = false;
  try {
    Response response = chain.proceed(originalRequest);
    if (transmitter.isCanceled()) {
      closeQuietly(response);
      throw new IOException("Canceled");
    }
    return response;
  } catch (IOException e) {
    calledNoMoreExchanges = true;
    throw transmitter.noMoreExchanges(e);
  } finally {
    if (!calledNoMoreExchanges) {
      transmitter.noMoreExchanges(null);
    }
  }
}
```
This method uses a list to collect all the interceptors, which are then loaded into a RealInterceptorChain object; a call to proceed returns the result. So how does RealInterceptorChain handle that call?
```java
public Response proceed(Request request, Transmitter transmitter, @Nullable Exchange exchange)
    throws IOException {
  ...
  // When I saw "next" here I had to ask: next what? It is the next link
  // of the chain, built with index + 1
  RealInterceptorChain next = new RealInterceptorChain(interceptors, transmitter, exchange,
      index + 1, request, call, connectTimeout, readTimeout, writeTimeout);
  // Fetch the current interceptor and hand it the rest of the chain
  Interceptor interceptor = interceptors.get(index);
  Response response = interceptor.intercept(next);

  // Confirm that the next interceptor made its required call to chain.proceed()
  ...
  return response;
}
```
Note that the first RealInterceptorChain is created with index = 0, and each call to proceed builds the next chain with index + 1, so the interceptors execute strictly in order.
At interceptor.intercept(next) the trail seems to stop. Curious, I opened one Interceptor implementation, CacheInterceptor, and the next sighting of proceed finally cracked the case:
```java
@Override public Response intercept(Chain chain) throws IOException {
  ...
  // The interceptor itself calls proceed() on the next chain, which drives
  // the remaining interceptors, then works with the returned response
  networkResponse = chain.proceed(networkRequest);
  ...
  return response;
}
```
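Stripped of all HTTP details, the index + 1 recursion can be sketched like this. Every name here is my own toy version, not OkHttp's; a request and response are just strings.

```java
import java.util.Arrays;
import java.util.List;

// Each interceptor may transform the request, call proceed() to run the rest
// of the chain, then transform the response on the way back out.
interface MiniInterceptor {
  String intercept(MiniChain chain, String request);
}

class MiniChain {
  private final List<MiniInterceptor> interceptors;
  private final int index;

  MiniChain(List<MiniInterceptor> interceptors, int index) {
    this.interceptors = interceptors;
    this.index = index;
  }

  String proceed(String request) {
    // End of the chain: pretend the server answered.
    if (index >= interceptors.size()) return "response(" + request + ")";
    // Hand the current interceptor a chain pointing at index + 1,
    // the same trick RealInterceptorChain.proceed() uses.
    MiniChain next = new MiniChain(interceptors, index + 1);
    return interceptors.get(index).intercept(next, request);
  }
}
```

Running two interceptors shows the onion-like nesting: a logging interceptor wraps whatever the rest of the chain returns, while a retry-style interceptor rewrites the request before passing it down.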
How does OkHttp use thread pools
Last question: how does it use the thread pool? First, the meaning of each thread pool parameter, and how the pool actually works.
```java
public synchronized ExecutorService executorService() {
  if (executorService == null) {
    executorService = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
        new SynchronousQueue<>(), Util.threadFactory("OkHttp Dispatcher", false));
  }
  return executorService;
}

// corePoolSize    -- the number of threads to keep in the pool, even if they
//                    are idle, unless allowCoreThreadTimeOut is set
// maximumPoolSize -- the maximum number of threads to allow in the pool
// keepAliveTime   -- when the number of threads is greater than the core, this
//                    is the maximum time that excess idle threads will wait for
//                    new tasks before terminating
// unit            -- the time unit for the keepAliveTime argument
// workQueue       -- the queue to use for holding tasks before they are
//                    executed; it holds only the Runnable tasks submitted by
//                    the execute method
// threadFactory   -- the factory to use when the executor creates a new thread
public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
    TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory) {
  this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory,
      defaultHandler);
}
```
Creation parameters:
- The number of core threads; core threads stay alive even when idle. Here it is 0
- The maximum number of threads allowed in the pool. Here it is Integer.MAX_VALUE; in practice it never gets anywhere near that, since OkHttp caps concurrent requests at 64
- The keep-alive time, i.e. how long a non-core idle thread waits for a new task before terminating. Here it is 60 seconds
- The unit of the keep-alive time
- The work queue; here a SynchronousQueue, a blocking queue with no internal capacity: each hand-off must complete before the next element can go in
- Thread factory
- Why does OkHttp set up thread pools this way, and what are the benefits?
In fact, these parameters are exactly what Executors.newCachedThreadPool() uses. With a SynchronousQueue there is no buffering at all, so whenever a network request arrives and no idle worker is waiting (the core size is 0, so there are no standing threads), the pool creates a new thread.
More precisely: when the thread count exceeds the core size, the queue is full, and the count is still below the maximum, a new thread is created. Since OkHttp's maximum is Integer.MAX_VALUE and a SynchronousQueue is always "full", the pool keeps spawning threads on demand, making it a high-concurrency thread pool.
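A minimal sketch of why this configuration behaves like a cached pool. The OkHttpStylePool helper is my own name; OkHttp's real version also passes its named "OkHttp Dispatcher" ThreadFactory, omitted here.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class OkHttpStylePool {
  // Same parameters OkHttp passes: 0 core threads, an effectively unbounded
  // maximum, a 60-second idle timeout, and a zero-capacity SynchronousQueue.
  static ExecutorService create() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
        new SynchronousQueue<>());
  }
}
```

The key property is the queue: a SynchronousQueue rejects a non-blocking offer() unless another thread is already waiting to take, so ThreadPoolExecutor can never park a task in the queue and must either hand it to an idle worker or start a new thread.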