Introduction

In fact, the most profitable work in the software industry is not writing code. Those who only write code are called code monkeys, or programmers if they are a bit more senior, and both do the hard work. Is there a fancier job? There is. It's called consultant.

Consultants are there to help enterprises with planning, architecture and optimization. Sometimes a simple code change or an adjustment to the architecture can make the software or a process run much more efficiently, saving the enterprise hundreds of millions in costs.

Today, in addition to introducing how to support the HTTP and HTTPS protocols at the same time in Netty, we also introduce a website data optimization scheme worth hundreds of millions. With this scheme, an annual salary in the millions is no longer a dream!

Objective of this article

This article will show you how to support both HTTP and HTTP2 in a single Netty service that serves multiple images. We will show how the server returns those images, and finally introduce a speed optimization scheme worth hundreds of millions of dollars, from which you will benefit a lot.

A service supporting multiple images

On the server side, the service is started with a ServerBootstrap, whose group method specifies the acceptor (parent) group and the worker (child) group.

    public ServerBootstrap group(EventLoopGroup group) 
    public ServerBootstrap group(EventLoopGroup parentGroup, EventLoopGroup childGroup) 

Of course, you can pass two different groups or reuse the same group for both roles; that is why two overloads are provided, and in practice the difference is small.

Here we create a single EventLoopGroup in the main server and pass it to both ImageHttp1Server and ImageHttp2Server. Each server then calls the group method with it and configures its own handlers.
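A minimal sketch of what the main server might look like is shown below; the constructors, ports and start methods are assumptions for illustration, the point is only that the two services share one EventLoopGroup.

    // one shared EventLoopGroup for both services (start() and the ports are hypothetical)
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        new ImageHttp1Server(group).start(8080);   // plain HTTP
        new ImageHttp2Server(group).start(8443);   // HTTPS, negotiated to HTTP2
        // ... wait here until the server channels are closed ...
    } finally {
        group.shutdownGracefully();
    }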

Let's take a look at how ImageHttp1Server is constructed:

    ServerBootstrap b = new ServerBootstrap();
    b.option(ChannelOption.SO_BACKLOG, 1024);
    b.group(group).channel(NioServerSocketChannel.class)
            .handler(new LoggingHandler(LogLevel.INFO))
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) {
                    ch.pipeline().addLast(
                            new HttpRequestDecoder(),
                            new HttpResponseEncoder(),
                            new HttpObjectAggregator(MAX_CONTENT_LENGTH),
                            new Http1RequestHandler());
                }
            });

We add Netty's built-in HttpRequestDecoder, HttpResponseEncoder and HttpObjectAggregator, plus a custom Http1RequestHandler.
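The custom Http1RequestHandler itself is not listed in this article; since HttpObjectAggregator hands it a complete FullHttpRequest, a minimal sketch under that assumption could look like this (the dispatch method is hypothetical):

    public class Http1RequestHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
        @Override
        protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) throws Exception {
            if (HttpUtil.is100ContinueExpected(request)) {
                // HTTP1-specific: acknowledge Expect: 100-continue before the real response
                ctx.write(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE));
            }
            // HTTP1 has no stream id, so pass null and dispatch by URI
            // (hypothetical helper, see the routing sketch at the end of the image section)
            handleRequest(ctx, null, request);
        }
    }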

Take a look at the ImageHttp2Server construction again:

    ServerBootstrap b = new ServerBootstrap();
    b.option(ChannelOption.SO_BACKLOG, 1024);
    b.group(group).channel(NioServerSocketChannel.class)
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) {
                    ch.pipeline().addLast(
                            sslCtx.newHandler(ch.alloc()),
                            new CustProtocolNegotiationHandler());
                }
            });

For simplicity, requests that arrive over plain HTTP are handled by the HTTP1 service, and requests that arrive over HTTPS are handled by the HTTP2 service.

So in the HTTP2 service we only need the custom CustProtocolNegotiationHandler, and we don't have to handle the cleartext upgrade request at all.
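The sslCtx used in the ImageHttp2Server pipeline above is not shown in this article; one way to build it (a sketch assuming a self-signed test certificate, not a production setup) is to enable ALPN so that client and server can negotiate h2 or http/1.1:

    SelfSignedCertificate cert = new SelfSignedCertificate();   // test certificate only
    SslContext sslCtx = SslContextBuilder.forServer(cert.certificate(), cert.privateKey())
            .sslProvider(SslProvider.JDK)
            .applicationProtocolConfig(new ApplicationProtocolConfig(
                    ApplicationProtocolConfig.Protocol.ALPN,
                    // NO_ADVERTISE / ACCEPT are the behaviors supported by the JDK provider
                    ApplicationProtocolConfig.SelectorFailureBehavior.NO_ADVERTISE,
                    ApplicationProtocolConfig.SelectedListenerFailureBehavior.ACCEPT,
                    ApplicationProtocolNames.HTTP_2,
                    ApplicationProtocolNames.HTTP_1_1))
            .build();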

The HTTP2 handler

In the TLS environment, the custom CustProtocolNegotiationHandler, which extends ApplicationProtocolNegotiationHandler, handles the protocol negotiation between client and server.
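A sketch of what that handler could look like (the fallback protocol and the pipeline configured for each branch are assumptions; the HTTP2 branch would contain the frame conversion shown below):

    public class CustProtocolNegotiationHandler extends ApplicationProtocolNegotiationHandler {

        protected CustProtocolNegotiationHandler() {
            // fall back to HTTP1 if the client does not negotiate h2 via ALPN
            super(ApplicationProtocolNames.HTTP_1_1);
        }

        @Override
        protected void configurePipeline(ChannelHandlerContext ctx, String protocol) {
            if (ApplicationProtocolNames.HTTP_2.equals(protocol)) {
                configureHttp2(ctx);            // the conversion shown in the next snippet
            } else if (ApplicationProtocolNames.HTTP_1_1.equals(protocol)) {
                ctx.pipeline().addLast(new HttpServerCodec(),
                        new HttpObjectAggregator(MAX_CONTENT_LENGTH),
                        new Http1RequestHandler());
            } else {
                throw new IllegalStateException("unknown protocol: " + protocol);
            }
        }

        private void configureHttp2(ChannelHandlerContext ctx) {
            // see the conversion process below
        }
    }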

For the HTTP2 protocol, we use Netty's built-in InboundHttp2ToHttpAdapterBuilder and HttpToHttp2ConnectionHandlerBuilder to convert HTTP2 frames into HTTP1 FullHttpRequest objects. This lets us process the messages just as we would HTTP1.

The conversion process is as follows:

    DefaultHttp2Connection connection = new DefaultHttp2Connection(true);
    InboundHttp2ToHttpAdapter listener = new InboundHttp2ToHttpAdapterBuilder(connection)
            .propagateSettings(true)
            .validateHttpHeaders(false)
            .maxContentLength(MAX_CONTENT_LENGTH)
            .build();

    ctx.pipeline().addLast(new HttpToHttp2ConnectionHandlerBuilder()
            .frameListener(listener)
            .connection(connection)
            .build());

    ctx.pipeline().addLast(new Http2RequestHandler());

The only difference between the converted HTTP2 handler and a normal HTTP1 handler is that it needs to carry an extra streamId attribute in the request and response headers.

It also does not need to deal with the HTTP1-specific 100-continue and keep-alive handling. Apart from that, it is no different from the HTTP1 handler.
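As a sketch of what that could look like (the code below is an assumption; the extension header name comes from Netty's HttpConversionUtil): the HTTP2 handler reads the stream id off the incoming request, and sendResponse copies it back onto the response before writing.

    // in the HTTP2 handler: the converter stores the stream id in an extension header
    String streamId = request.headers().get(
            HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text());

    // sketch of sendResponse; for HTTP1 the streamId would simply be null
    private void sendResponse(ChannelHandlerContext ctx, String streamId,
            FullHttpResponse response, FullHttpRequest request) {
        response.headers().setInt(CONTENT_LENGTH, response.content().readableBytes());
        if (streamId != null) {
            // HTTP2 only: tell the converter which stream this response belongs to
            response.headers().set(
                    HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), streamId);
        }
        ctx.writeAndFlush(response);
    }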

Processing pages and images

Since the converter turns HTTP2 frames into plain HTTP1 objects, handling requests for the page and the images is not much different from ordinary HTTP1 processing.

For the page, we fetch the HTML to return and set CONTENT_TYPE to "text/html; charset=UTF-8":

    private void handlePage(ChannelHandlerContext ctx, String streamId,  FullHttpRequest request) throws IOException {
        ByteBuf content = ImagePage.getContent();
        FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK, content);
        response.headers().set(CONTENT_TYPE, "text/html; charset=UTF-8");
        sendResponse(ctx, streamId, response, request);
    }

For images, we get the image we want to return, convert it to ByteBuf, set CONTENT_TYPE to “image/jpeg”, and return:

    private void handleImage(String id, ChannelHandlerContext ctx, String streamId,
            FullHttpRequest request) {
        ByteBuf image = ImagePage.getImage(parseInt(id));
        FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK, image);
        response.headers().set(CONTENT_TYPE, "image/jpeg");
        sendResponse(ctx, streamId, response, request);
    }

This allows us to process both page and image requests on the Netty server side.
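Putting the two together, a shared dispatch method (the hypothetical handleRequest mentioned in the earlier sketch) could route by URI roughly like this; the URL patterns and error handling are assumptions:

    private void handleRequest(ChannelHandlerContext ctx, String streamId, FullHttpRequest request)
            throws IOException {
        String uri = request.uri();
        if ("/".equals(uri)) {
            handlePage(ctx, streamId, request);                 // the HTML page shown above
        } else if (uri.startsWith("/image/")) {                 // e.g. /image/1 (illustrative pattern)
            handleImage(uri.substring("/image/".length()), ctx, streamId, request);
        } else {
            sendResponse(ctx, streamId,
                    new DefaultFullHttpResponse(HTTP_1_1, NOT_FOUND), request);
        }
    }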

Hundreds of millions of dollars worth of speed optimization

Finally, it's time for the most exciting part of this article. What is this speed optimization scheme worth hundreds of millions?

Before we talk about the scheme, let me tell you a story about flood control. Two counties sit beside a big river. The river is very unstable and floods frequently, but the heads of the two counties handle it very differently.

The head of county A was serious and responsible. He sent people to patrol and inspect his section of the river regularly; he built dikes, planted trees and went on patrol himself.

The head of county B never inspected anything. When the river flooded, he organized people to fight the flood and rescue the victims, and then the media all reported his great achievements in flood fighting.

OK, that's the story, so back to our optimization. Whether the user requests the page or an image, the ctx.writeAndFlush(response) method is eventually called to write back the response.

Now suppose we wrap that call in a scheduled task so that it executes after a delay:

    ctx.executor().schedule(new Runnable() {
        @Override
        public void run() {
            ctx.writeAndFlush(response);
        }
    }, latency, TimeUnit.MILLISECONDS);

The server does not send the response until latency milliseconds have elapsed. For example, here we set latency to 5 seconds.

Of course, 5 seconds is not satisfactory, so the leader or the customer comes to you and asks you to optimize it. You say the performance problem is very difficult, involving Maxwell's equations and the third law of thermodynamics, and will take a month. The leader says: roll up your sleeves and work hard, and I'll give you a 50% raise next month.

A month later, you change latency to 2.5 seconds and deliver a 100% performance improvement. How many hundreds of millions is that optimization worth?

Conclusion

Of course, the previous section was just a joke, but scheduling tasks in a Netty response is still something worth mastering, for reasons you can guess!

See learn-netty4 for the example code of this article.

This article is available at www.flydean.com/34-netty-mu…

The most accessible explanations, the most in-depth material, the most concise tutorials, and many tricks you never knew are waiting for you to discover!

Welcome to follow my official account: "procedure those things", which knows technology and knows you even better!