1. Background

We have a service that calls an HTTP-based interface provided by another department, with tens of millions of calls per day; HttpClient is used to make these calls. Because the QPS could not keep up, I took a look at the business code and made some optimizations, which are recorded here.

Before-and-after comparison: before optimization the average execution time was 250ms; after optimization it is 80ms, cutting the time by roughly two-thirds, and the container no longer keeps triggering thread-exhaustion alarms. Refreshing~

2. Analysis

The original implementation of the project was rather rough: each request initialized a new HttpClient, created an HttpPost object, executed it, read the entity from the response, saved it as a string, and then explicitly closed the response and the client. Let's analyze and optimize it bit by bit:

2.1 Overhead of repeatedly creating HttpClient

HttpClient is a thread-safe class; there is no need for every thread to create a new instance every time it is used. A single shared instance is enough.
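A minimal sketch of the singleton idea, assuming a holder class of our own (the pooled client built in section 3 is what the project actually uses):

public class HttpClientHolder {
    // One shared, thread-safe client for the whole application
    private static final CloseableHttpClient CLIENT = HttpClients.createDefault();

    public static CloseableHttpClient get() {
        return CLIENT;
    }
}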

2.2 Cost of Repeatedly Creating TCP Connections

TCP's three-way handshake and four-way teardown are too costly for high-frequency requests. Imagine spending 5ms per request on connection negotiation: for a single machine at 100 QPS, that is 500ms per second spent purely on handshakes and teardowns. We are not senior executives, so we programmers don't need to shake hands this often; switch to keep-alive mode and reuse connections instead!

2.3 Cost of duplicating the entity content

In the original logic, the following code was used:

HttpEntity entity = httpResponse.getEntity();
String response = EntityUtils.toString(entity);

This copies the content into an extra String while the original HttpResponse still holds its own copy, which also has to be consumed. Under high concurrency and with very large response bodies, this wastes a lot of memory.

3. Implementation

According to the analysis above, we mainly need to do three things: first, make the client a singleton; second, cache live (keep-alive) connections; third, handle the returned result in a better way. The first point needs no further explanation, so let's talk about the second.

When it comes to connection caching, database connection pools naturally come to mind. HttpClient 4 provides PoolingHttpClientConnectionManager as its connection pool. Let's optimize step by step:

3.1 Define a Keep-Alive strategy

As for keep-alive itself, this article will not elaborate on it, other than to note that whether to use keep-alive depends on the business scenario; it is not a panacea. There are also plenty of stories involving keep-alive and TIME_WAIT/CLOSE_WAIT.

In this business scenario, a small number of fixed clients access the server continuously and at extremely high frequency, so enabling keep-alive is very appropriate.

Also, note that HTTP keep-alive and TCP keepalive are not the same thing. Back to the main topic: define a strategy as follows:

ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
    @Override
    public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
        HeaderElementIterator it = new BasicHeaderElementIterator(
                response.headerIterator(HTTP.CONN_KEEP_ALIVE));
        while (it.hasNext()) {
            HeaderElement he = it.nextElement();
            String param = he.getName();
            String value = he.getValue();
            if (value != null && param.equalsIgnoreCase("timeout")) {
                return Long.parseLong(value) * 1000;
            }
        }
        return 60 * 1000; // If the server specifies no timeout, default the duration to 60s
    }
};

3.2 Configure a PoolingHttpClientConnectionManager

PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(500);          // Maximum total connections in the pool
connectionManager.setDefaultMaxPerRoute(50); // By default, at most 50 concurrent connections per route

You can also set the maximum number of connections for a specific route individually.
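A minimal sketch of such a per-route limit; the host and port here are placeholders:

// Allow up to 80 concurrent connections to this particular backend
HttpHost backend = new HttpHost("api.example.com", 8080);
connectionManager.setMaxPerRoute(new HttpRoute(backend), 80);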

3.3 Build the HttpClient

httpClient = HttpClients.custom()
                .setConnectionManager(connectionManager)
                .setKeepAliveStrategy(myStrategy)
                .setDefaultRequestConfig(RequestConfig.custom().setStaleConnectionCheckEnabled(true).build())
                .build();

Note: the setStaleConnectionCheckEnabled option, which checks for and evicts connections that have already been closed, is not recommended. A better approach is to start a dedicated thread that periodically calls closeExpiredConnections and closeIdleConnections, as shown below.

public static class IdleConnectionMonitorThread extends Thread {

    private final HttpClientConnectionManager connMgr;
    private volatile boolean shutdown;

    public IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
        super();
        this.connMgr = connMgr;
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000);
                    // Close expired connections
                    connMgr.closeExpiredConnections();
                    // Optionally, close connections that have been idle longer than 30 sec
                    connMgr.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}
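For completeness, a sketch of how such a monitor might be started next to the pooled client; the variable names are illustrative:

IdleConnectionMonitorThread monitor = new IdleConnectionMonitorThread(connectionManager);
monitor.setDaemon(true); // do not keep the JVM alive just for housekeeping
monitor.start();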

3.4 Reduce overhead when executing requests with HttpClient

The caveat here is not to close the connection ourselves; otherwise it cannot go back into the pool and be reused.

One possible way to get the content is, much like before, to copy the entity's content into a string and then consume the entity:

res = EntityUtils.toString(response.getEntity(), "UTF-8");
EntityUtils.consume(response.getEntity());

However, the more recommended approach is to define a ResponseHandler, so that we do not have to catch exceptions and close the stream ourselves. Let's take a look at the relevant source:

public <T> T execute(final HttpHost target, final HttpRequest request,
        final ResponseHandler<? extends T> responseHandler, final HttpContext context)
        throws IOException, ClientProtocolException {
    Args.notNull(responseHandler, "Response handler");

    final HttpResponse response = execute(target, request, context);

    final T result;
    try {
        result = responseHandler.handleResponse(response);
    } catch (final Exception t) {
        final HttpEntity entity = response.getEntity();
        try {
            EntityUtils.consume(entity);
        } catch (final Exception t2) {
            // Log this exception. The original exception is more
            // important and will be thrown to the caller.
            this.log.warn("Error consuming content after an exception.", t2);
        }
        if (t instanceof RuntimeException) {
            throw (RuntimeException) t;
        }
        if (t instanceof IOException) {
            throw (IOException) t;
        }
        throw new UndeclaredThrowableException(t);
    }

    // Handling the response was successful. Ensure that the content has
    // been fully consumed.
    final HttpEntity entity = response.getEntity();
    EntityUtils.consume(entity);
    return result;
}

As you can see, when we call the execute method with a ResponseHandler, the consume method is automatically called at the end. It looks like this:

public static void consume(final HttpEntity entity) throws IOException {
    if (entity == null) {
        return;
    }
    if (entity.isStreaming()) {
        final InputStream instream = entity.getContent();
        if (instream != null) {
            instream.close();
        }
    }
}

You can see that it finally closes the input stream.
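Putting it together, a minimal sketch of calling execute with a ResponseHandler; the httpPost request and the UTF-8 charset are assumptions for illustration:

String result = httpClient.execute(httpPost, new ResponseHandler<String>() {
    @Override
    public String handleResponse(HttpResponse response) throws IOException {
        // We only read the body here; HttpClient consumes the entity and
        // releases the connection back to the pool after the handler returns.
        HttpEntity entity = response.getEntity();
        return entity != null ? EntityUtils.toString(entity, "UTF-8") : null;
    }
});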

4. Other

Here are some additional configurations and reminders:

4.1 Some Timeout configurations for HttpClient

CONNECTION_TIMEOUT is the connection timeout and SO_TIMEOUT is the socket timeout; they are different things. The connection timeout is how long to wait for the connection to be established before the request is sent; the socket timeout is how long to wait for response data once the connection is up.

HttpParams params = new BasicHttpParams();

// Request connection timeout: 2 seconds
Integer CONNECTION_TIMEOUT = 2 * 1000;
// Wait-for-data (socket) timeout: 2 seconds; adjust according to the business
Integer SO_TIMEOUT = 2 * 1000;

// Timeout in milliseconds used when retrieving a ManagedClientConnection from the
// ClientConnectionManager. This parameter expects a java.lang.Long value.
// If it is not set, it defaults to CONNECTION_TIMEOUT, so it must be set explicitly.
// In HttpClient 4.2.3, as I recall, this was changed to take an object, so passing a
// primitive long directly caused an error; it was later changed back.
Long CONN_MANAGER_TIMEOUT = 500L;

params.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, CONNECTION_TIMEOUT);
params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT, SO_TIMEOUT);
params.setLongParameter(ClientPNames.CONN_MANAGER_TIMEOUT, CONN_MANAGER_TIMEOUT);

// Test connection availability before submitting a request
params.setBooleanParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK, true);

// Set the HttpClient retry count. The default is 3; here retries are disabled
// (adjust according to the project's needs).
httpClient.setHttpRequestRetryHandler(new DefaultHttpRequestRetryHandler(0, false));
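For HttpClient 4.3+, which the builder-style examples above already target, the same timeouts are usually expressed through RequestConfig rather than HttpParams; a rough sketch mirroring the values above:

RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(2 * 1000)          // connection establishment timeout
        .setSocketTimeout(2 * 1000)           // wait-for-data timeout
        .setConnectionRequestTimeout(500)     // timeout when borrowing a connection from the pool
        .build();

CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(connectionManager)
        .setDefaultRequestConfig(requestConfig)
        .setRetryHandler(new DefaultHttpRequestRetryHandler(0, false)) // disable retries
        .build();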

4.2 If Nginx sits in between, keep-alive must also be configured on both sides of Nginx

In today's systems it is rare not to have Nginx somewhere in the chain. By default, Nginx keeps long (keep-alive) connections with clients but uses short connections to upstream servers. Pay attention to the keepalive_timeout and keepalive_requests parameters on the client-facing side, and to the keepalive setting in the upstream block.