5.1 Basic introduction to the threading model
-
Different threading models have a great impact on program performance. To understand the Netty threading model, we will walk through each threading model in turn and finally look at the advantages of the Netty threading model.
-
The existing threading models are: the traditional blocking I/O service model and the Reactor pattern
-
According to the number of Reactors and the number of threads in the resource pool, there are three typical implementations: single Reactor single thread; single Reactor multithreading; master/slave Reactor multithreading
-
Netty threading model (Netty mainly made some improvements based on the master/slave Reactor multithreading model, which has multiple reactors)
5.2 Traditional blocking I/O service model
5.2.1 Working principle diagram
5.2.2 Model features
-
Use blocking IO mode to get input data
-
Each connection requires a separate thread for data input, business processing, and data return
5.2.3 Problem analysis
1) When the number of concurrent requests is large, a large number of threads are created, occupying a large amount of system resources
2) After a connection is created, if the current thread temporarily has no data to read, the thread blocks in the read operation, wasting thread resources (a minimal sketch of this model follows below)
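To make the model concrete, here is a minimal sketch of the thread-per-connection server described above, written with the plain java.net and java.io APIs; the port number and the echo "business processing" are illustrative assumptions, not part of the original example.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingIoServer {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(7000); // illustrative port
        while (true) {
            // accept() blocks until a client connects
            Socket socket = serverSocket.accept();
            // one dedicated thread per connection: read, process, write back
            new Thread(() -> {
                try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    String line;
                    // readLine() blocks while the client has nothing to send, tying up this thread
                    while ((line = in.readLine()) != null) {
                        out.println("echo: " + line); // trivial "business processing"
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}

With many concurrent clients this creates one thread per connection, and idle connections keep their threads blocked in readLine(), which is exactly the resource waste described above.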
5.3 Reactor model
5.3.1 Solutions to the two disadvantages of the traditional model
-
Based on the I/O multiplexing model: multiple connections share one blocking object, and the application only needs to wait on that one blocking object instead of blocking on every connection. When a connection has new data to process, the operating system notifies the application, and the thread returns from the blocked state to begin business processing. The Reactor pattern is also known as: 1. Reactor pattern 2. Dispatcher pattern 3. Notifier pattern (see the NIO sketch after this list)
-
Thread pool-based reuse of thread resources: there is no need to create a thread for each connection; the business processing tasks after a connection completes are assigned to threads in the pool, so one thread can process the business of multiple connections
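As a point of comparison, below is a minimal Java NIO sketch of the I/O multiplexing idea: one selector (the shared blocking object) waits for events on many channels, and a single thread serves whichever connections are ready. The port and the echo logic are illustrative assumptions.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MultiplexingServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(7000));
        serverChannel.configureBlocking(false);
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // the only blocking call; returns when some connection is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = serverChannel.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    if (client.read(buffer) == -1) {
                        client.close(); // client disconnected
                    } else {
                        buffer.flip();
                        client.write(buffer); // echo back as the "business processing"
                    }
                }
            }
        }
    }
}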
5.3.2 Basic design idea of Reactor model
I/O multiplexing combined with a thread pool is the basic design idea of the Reactor model, as shown in the figure below
-
The Reactor pattern is a pattern in which one or more inputs are passed simultaneously to a service handler (it is event-driven)
-
The Reactor pattern is also called the Dispatcher pattern because a server-side program processes incoming requests and dispatches them synchronously to the appropriate processing thread
-
The Reactor pattern uses I/O multiplexing to listen for events; after receiving events, it distributes them to a thread (or process). This is the key to high-concurrency processing in network servers
5.3.3 Core components in the Reactor model:
-
Reactor: the Reactor runs in a separate thread, listening for I/O events and distributing (dispatching) them to the appropriate handlers so they can be reacted to. It is like a company telephone operator, who takes calls from customers and redirects the line to the appropriate contact;
-
Handlers: the handler performs the actual work required by an I/O event, similar to the staff member in the company whom the customer actually wants to talk to. The Reactor responds to I/O events by dispatching the appropriate handler, which performs non-blocking operations.
5.3.4 Reactor Model Classification:
There are three typical implementations, depending on the number of Reactors and the number of threads in the resource pool
-
Single Reactor Single thread
-
Single Reactor multithreading
-
Master/Slave Reactor multithreading
5.4 Single Reactor Single Thread
Schematic diagram
5.4.1 Solution Description
-
Select is the standard network programming API of the I/O multiplexing model introduced earlier; it enables an application to listen for multiple connection requests through one blocking object
-
The Reactor object monitors client request events through Select and distributes events through Dispatch
-
In the case of a connection request event, the Acceptor processes the connection request through Accept, and then creates a Handler object to handle subsequent business processing after the connection is completed
-
If it is not a connection request event, the Reactor dispatches it to the Handler corresponding to that connection to respond
-
The Handler completes the full business flow of Read → business processing → Send
Combined with an example: the server side handles all I/O operations (including connect, read, write, etc.) with one thread and multiplexing. The code is simple and clear, but if the number of client connections grows, it cannot keep up. The earlier NIO case belongs to this model.
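The following condensed sketch shows one way the Reactor, Acceptor, and Handler roles described above might map onto plain Java NIO; the class names, port, and echo logic are illustrative assumptions, not Netty APIs.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleReactorSingleThread implements Runnable {
    private final Selector selector = Selector.open();
    private final ServerSocketChannel serverChannel = ServerSocketChannel.open();

    public SingleReactorSingleThread(int port) throws IOException {
        serverChannel.bind(new InetSocketAddress(port));
        serverChannel.configureBlocking(false);
        // attach the Acceptor so dispatch() can find it for connection events
        serverChannel.register(selector, SelectionKey.OP_ACCEPT).attach((Runnable) this::accept);
    }

    @Override
    public void run() { // the Reactor: select + dispatch, all in one thread
        try {
            while (true) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    dispatch(it.next());
                    it.remove();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void dispatch(SelectionKey key) {
        // every key carries its own handler (Acceptor or Handler) as the attachment
        Runnable handler = (Runnable) key.attachment();
        if (handler != null) {
            handler.run();
        }
    }

    private void accept() { // the Acceptor: handle connection request events
        try {
            SocketChannel client = serverChannel.accept();
            if (client != null) {
                client.configureBlocking(false);
                SelectionKey key = client.register(selector, SelectionKey.OP_READ);
                key.attach((Runnable) () -> handle(client)); // the Handler for this connection
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void handle(SocketChannel client) { // the Handler: read -> business processing -> send
        try {
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            int n = client.read(buffer);
            if (n > 0) {
                buffer.flip();
                client.write(buffer); // echo as the business step
            } else if (n == -1) {
                client.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOException {
        new SingleReactorSingleThread(7000).run();
    }
}

The dispatch method simply runs whatever handler object was attached to the ready key, so connection events go to the Acceptor and read events go to the per-connection Handler, all on the same thread.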
5.4.2 Analysis of advantages and disadvantages of the scheme
-
Advantages: the model is simple; there is no multithreading, no inter-process communication, and no competition, since everything is completed in one thread
-
Disadvantages: performance problem. With only one thread, the performance of a multi-core CPU cannot be fully exploited. While a Handler is processing the business on one connection, the whole process cannot handle other connection events, which easily leads to a performance bottleneck
-
Disadvantages: reliability problem. If the thread terminates unexpectedly or enters an infinite loop, the entire system communication module becomes unavailable and can no longer receive or process external messages, causing a node failure
-
Usage scenario: the number of clients is limited and business processing is very fast, such as Redis, whose business processing has O(1) time complexity
5.5 Single-reactor Multithreading
5.5.1 schematic diagram
5.5.2 Summary of the figure above
-
The Reactor object monitors client request events through select. Upon receiving events, the Reactor object distributes them through Dispatch
-
If it is a connection request event, the Acceptor processes the connection request with accept, and then creates a Handler object to handle the various subsequent events after the connection is completed
-
If it is not a connection request, the Reactor dispatches it to the Handler corresponding to that connection
-
Handler is only responsible for responding to events, not for specific business processing. After reading data through read, it hands the work to a thread in the worker thread pool to process the business (a minimal sketch follows this list)
-
The worker thread pool allocates independent threads to complete the real business and returns the results to the handler
-
After receiving the response, the Handler sends the result to the client
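Below is a minimal sketch (plain Java, not Netty) of the handler/worker split described in this list: the handler only performs the read on the I/O thread and hands the potentially slow business work to a worker thread pool. The class name, pool size, and uppercase "business logic" are illustrative assumptions; in a fuller design the worker would hand its result back to the handler rather than writing directly.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadingHandler {
    // shared worker pool: business processing for many connections reuses these threads
    private static final ExecutorService WORKER_POOL = Executors.newFixedThreadPool(8);

    // called by the Reactor thread when a read event is dispatched to this connection
    public void onReadable(SelectionKey key) throws IOException {
        SocketChannel client = (SocketChannel) key.channel();
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        int n = client.read(buffer); // the handler only does the read on the I/O thread
        if (n <= 0) {
            return; // (connection-close handling omitted in this sketch)
        }
        buffer.flip();
        String request = StandardCharsets.UTF_8.decode(buffer).toString();

        // hand the (possibly slow) business processing to the worker pool
        WORKER_POOL.submit(() -> {
            String response = request.toUpperCase(); // placeholder business logic
            try {
                // write the result back (a fuller design would return it to the handler first)
                client.write(ByteBuffer.wrap(response.getBytes(StandardCharsets.UTF_8)));
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }
}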
5.5.3 Advantages and disadvantages of the scheme:
-
Advantages: It makes full use of the processing power of the multi-core CPU
-
Disadvantages: multithreaded data sharing and access are complex; the Reactor handles all event monitoring and responding while running in a single thread, so it easily becomes a performance bottleneck in high-concurrency scenarios.
5.6 Master/Slave Reactor Multithreading
5.6.1 Working principle diagram
In the single-Reactor multithreading model, the Reactor runs in a single thread, which easily becomes a performance bottleneck in high-concurrency scenarios. Therefore, the Reactor can be run in multiple threads
5.6.2 Scheme description in the figure above
-
The Reactor main thread (MainReactor) listens for connection events through select and processes connection events through the Acceptor
-
After the Acceptor finishes processing the connection event, the MainReactor assigns the connection to a SubReactor
-
The Subreactor adds connections to the connection queue for listening and creates handlers for various events
-
When a new event occurs, the Subreactor calls the corresponding handler
-
Handler reads the data via read and distributes it to subsequent worker threads for processing
-
The worker thread pool allocates separate worker threads for business processing and returns the results (a minimal sketch follows this list)
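A compact sketch (plain Java NIO, not Netty) of the master/slave split described in this list: the main reactor thread only accepts connections and hands each one to a sub-reactor, and each sub-reactor runs its own selector loop for read/write events. The port, sub-reactor count, and echo logic are illustrative assumptions; a production version would queue pending registrations instead of racing register() against select().

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MainSubReactorServer {
    static class SubReactor implements Runnable {
        final Selector selector;

        SubReactor() throws IOException {
            this.selector = Selector.open();
        }

        void register(SocketChannel client) throws IOException {
            client.configureBlocking(false);
            selector.wakeup(); // break out of select() so the new channel can register promptly
            client.register(selector, SelectionKey.OP_READ);
        }

        @Override
        public void run() {
            try {
                while (true) {
                    selector.select(500); // timeout leaves a gap for new registrations
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isReadable()) { // read/write only; business work could go to a worker pool
                            SocketChannel client = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(1024);
                            if (client.read(buf) == -1) {
                                client.close();
                                continue;
                            }
                            buf.flip();
                            client.write(buf); // echo as placeholder business processing
                        }
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        SubReactor[] subReactors = new SubReactor[2]; // illustrative sub-reactor count
        for (int i = 0; i < subReactors.length; i++) {
            subReactors[i] = new SubReactor();
            new Thread(subReactors[i], "sub-reactor-" + i).start();
        }

        // main reactor: a dedicated thread (here, main) that only accepts connections
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(7000));
        int next = 0;
        while (true) {
            SocketChannel client = serverChannel.accept(); // blocking accept keeps the sketch short
            // round-robin assignment of the new connection to a sub-reactor
            subReactors[next++ % subReactors.length].register(client);
        }
    }
}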
5.6.4 Advantages and disadvantages of the solution
-
Advantages: The responsibilities of the parent thread and child thread are simple and clear. The parent thread only needs to receive new connections, and the child thread completes the subsequent service processing.
-
Advantages: Simple data interaction between parent thread and child thread, Reactor main thread only needs to pass new connection to child thread, child thread does not need to return data.
-
Disadvantages: High programming complexity
-
Combined with examples: this model is widely used in many projects, including the master/slave Reactor multi-process model in Nginx, the master/slave multithreading in Memcached, and the master/slave multithreading model in Netty
5.7 Reactor Model summary
5.7.1 The three modes illustrated by a real-life example
-
Single Reactor single thread: the receptionist and the waiter are the same person, serving the customer through the whole process
-
Single Reactor multithreading, 1 receptionist, multiple waiters, the receptionist is only responsible for reception
-
Master/Slave Reactor multithreading: multiple receptionists, multiple waiters
5.7.2 The Reactor model has the following advantages
-
Fast response: it is not blocked by a single synchronous operation, although the Reactor itself is still synchronous
-
It minimizes complex multithreading and synchronization issues and avoids multithreading/process switching overhead
-
It has good scalability and can make full use of CPU resources by increasing the number of Reactor instances
-
The Reactor model itself is independent of the specific event processing logic and has high reusability
5.8 Netty model
5.8.1 Schematic diagram of Working Principle 1- Simple version
Netty mainly makes some improvements on the master/slave Reactor multithreading model (see the figure), in which there are multiple Reactors
5.8.2 Description of the figure above
-
The client initiates a connection request to the server
-
The BossGroup thread maintains a Selector and cares only about the Accept event. When it receives an Accept event, it obtains the corresponding SocketChannel
-
The SocketChannel is encapsulated as a NioSocketChannel for later use
-
The NioSocketChannel is registered with a Worker thread's selector (event loop) and maintained there
-
When the Worker thread's selector detects an event it is interested in on the channel, it processes it (through the handler); note that the handler has already been added to the channel
5.8.3 Schematic diagram of working principle 2- Advanced version
-
Netty abstracts two groups of thread pools: BossGroup and WorkerGroup
-
Each group contains NioEventLoops that constantly loop, listening on the channels registered with their selectors
-
The BossGroup accepts the connection and hands it off to the WorkerGroup
5.8.4 Schematic diagram of Working Principle 3- Final version
5.8.5 Summary of the illustration above
-
Netty abstracts two groups of thread pools: BossGroup, which is responsible for receiving client connections, and WorkerGroup, which is responsible for network read and write operations
-
Both BossGroup and WorkerGroup types are NioEventLoopGroup
-
A NioEventLoopGroup is equivalent to a group of event loops containing multiple event loops, each of which is a NioEventLoop
-
A NioEventLoop represents a thread that executes processing tasks in a continuous loop. Each NioEventLoop has a selector that listens for network traffic from the socket tied to it
-
A NioEventLoopGroup can have multiple threads, that is, it can have multiple NioEventLoops
-
Each Boss NioEventLoop executes three steps in a loop
- Poll for accept events
- Process the accept event: establish a connection with the client, generate a NioSocketChannel, and register it with a selector on one of the Worker NioEventLoops
- Process the tasks in the task queue, namely runAllTasks
- Each Worker NioEventLoop executes three steps in a loop
- Poll for read and write events
- Handle I/O events, that is, read and write events, on the corresponding NioSocketChannel
- Process the tasks in the task queue, namely runAllTasks
- Each Worker NioEventLoop uses a pipeline when processing business. The pipeline contains the channel, that is, the corresponding channel can be obtained through the pipeline, and many handlers are maintained in the pipeline (see the snippet below)
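As an illustration of the pipeline described above, here is a sketch of a ChannelInitializer that adds a decoder, an encoder, and a business handler to a channel's pipeline; the handler names and the echo logic are illustrative assumptions, and such an initializer would be installed via childHandler exactly as in the example below.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class PipelineDemoInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipeline = ch.pipeline();         // each channel has exactly one pipeline
        pipeline.addLast("decoder", new StringDecoder()); // inbound: bytes -> String
        pipeline.addLast("encoder", new StringEncoder()); // outbound: String -> bytes
        pipeline.addLast("business", new SimpleChannelInboundHandler<String>() {
            @Override
            protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                ctx.writeAndFlush("echo: " + msg);        // placeholder business processing
            }
        });
        // the corresponding channel can be obtained back from the pipeline
        System.out.println(pipeline.channel() == ch);     // prints true
    }
}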
5.8.6 Netty Quick Start Example - TCP Service
-
The Netty server listens on port 6668, and the client can send a message to the server “Hello, server ~”
-
The server can reply to the client with a message “Hello, client ~”
NettyServer
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyServer {
    public static void main(String[] args) throws InterruptedException {
        // Create BossGroup and WorkerGroup
        // 1. Create two thread groups: bossGroup and workGroup
        // 2. bossGroup only handles connection requests; the real business processing with the client is handed over to workGroup
        // 3. Both run infinite loops
        // 4. The number of NioEventLoops in bossGroup and workGroup defaults to the actual number of CPU cores x 2
        NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);
        NioEventLoopGroup workGroup = new NioEventLoopGroup();
        try {
            // Create the server bootstrap object and set its parameters
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workGroup) // set the two thread groups for the Netty server
                    .channel(NioServerSocketChannel.class) // use NioServerSocketChannel as the server channel implementation
                    .option(ChannelOption.SO_BACKLOG, 128) // set the queue size for pending connections
                    .childOption(ChannelOption.SO_KEEPALIVE, true) // keep the connection alive
                    .childHandler(new ChannelInitializer<SocketChannel>() { // create a channel initializer object
                        // add the handler to the pipeline
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new NettyServerHandler());
                        }
                    });
            System.out.println("Server is ready...");
            // Bind a port and synchronize, generating a ChannelFuture object; this starts the server
            ChannelFuture cf = bootstrap.bind(6668).sync();
            // Listen for the channel being closed
            cf.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workGroup.shutdownGracefully();
        }
    }
}
NettyServerHandler
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipeline;
import io.netty.util.CharsetUtil;

/**
 * Define a channel inbound handler adapter
 */
public class NettyServerHandler extends ChannelInboundHandlerAdapter {
    // Reads the message sent by the client; this method is called as soon as a message arrives
    // ChannelHandlerContext: the context object, which contains the pipeline, address, and channel
    // Channel: the data transfer channel between server and client; pipeline: the handler chain of the business logic
    // msg: the message sent by the client
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("Server read thread:" + Thread.currentThread().getName());
        System.out.println("server ctx=" + ctx);
        Channel channel = ctx.channel();
        ChannelPipeline pipeline = ctx.pipeline(); // a doubly linked list: outbound and inbound
        // Convert msg to a ByteBuf
        // ByteBuf is provided by Netty; it is not NIO's ByteBuffer
        ByteBuf byteBuf = (ByteBuf) msg;
        System.out.println("The client sends the following message:" + byteBuf.toString(CharsetUtil.UTF_8));
        System.out.println("The client address is:" + channel.remoteAddress());
    }

    // The data has been read completely
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ctx.writeAndFlush(Unpooled.copiedBuffer("Hello, client", CharsetUtil.UTF_8));
    }

    // To handle an exception, it is usually necessary to close the channel
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}
NettyClient
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class NettyClient {
    public static void main(String[] args) throws InterruptedException {
        // As a client, this is simpler
        // First create an event loop group to maintain the connection to the server
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            // Create a client bootstrap object to build the Netty client
            // The server side uses ServerBootstrap
            Bootstrap bootstrap = new Bootstrap();
            // After creating the Bootstrap object, set its parameters
            bootstrap
                    .group(group) // step 1: set the event loop group
                    .channel(NioSocketChannel.class) // use NioSocketChannel as the client channel implementation
                    .handler(new ChannelInitializer<SocketChannel>() { // step 2: set the handler
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new NettyClientHandler());
                        }
                    });
            // After the bootstrap is configured, connect to the server
            // ChannelFuture involves Netty's asynchronous model
            ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 6668).sync();
            // Listen for the channel being closed
            channelFuture.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
NettyClientHandler
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

public class NettyClientHandler extends ChannelInboundHandlerAdapter {
    // This method is triggered when the channel is ready
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("client:" + ctx);
        // When ready, send a message to the server
        ctx.writeAndFlush(Unpooled.copiedBuffer("Hello server", CharsetUtil.UTF_8));
    }

    // There is a read event on the channel
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf byteBuf = (ByteBuf) msg;
        System.out.println("Server reply message:" + byteBuf.toString(CharsetUtil.UTF_8));
        System.out.println("Server address:" + ctx.channel().remoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}
The service side
- The default number of workGroup threads is: Number of CPU cores x 2. If the number of local CPU cores is 4, the number of workGroup threads is 8
- Each time a client establishes a connection, it allocates a thread for processing
- When the ninth client connects, allocation starts over from the beginning, reusing the first thread
The client
5.8.7 Tasks in the Task Queue
- Common tasks that are customized by user programs
ctx.channel().eventLoop().execute(new Runnable() {
    @Override
    public void run() {
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        ctx.writeAndFlush(Unpooled.copiedBuffer("Hello, client 002", CharsetUtil.UTF_8));
    }
});
- A scheduled task is user-defined
ctx.channel().eventLoop().schedule(new Runnable() {
    @Override
    public void run() {
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        ctx.writeAndFlush(Unpooled.copiedBuffer("Hello, client 002", CharsetUtil.UTF_8));
    }
}, 5, TimeUnit.SECONDS);
- A thread other than the channel's own Reactor thread calls various methods of the Channel
For example, in the business thread of a push system, the corresponding Channel reference is found according to the user's identity, and then a write method is called on it to push a message to that user; this falls into this scenario. The write is ultimately submitted to the channel's task queue and consumed asynchronously (a minimal sketch follows the handler code below)
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
import java.util.concurrent.TimeUnit;

/**
 * Define a channel inbound handler adapter
 */
public class NettyServerHandler extends ChannelInboundHandlerAdapter {
    // Reads the message sent by the client; this method is called as soon as a message arrives
    // ChannelHandlerContext: the context object, which contains the pipeline, address, and channel
    // msg: the message sent by the client
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // Solution 1: an ordinary task customized by the user program, submitted to the taskQueue
        ctx.channel().eventLoop().execute(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                ctx.writeAndFlush(Unpooled.copiedBuffer("Hello, client 002", CharsetUtil.UTF_8));
            }
        });

        // Solution 2: a user-defined scheduled task, submitted to the scheduledTaskQueue
        ctx.channel().eventLoop().schedule(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                ctx.writeAndFlush(Unpooled.copiedBuffer("Hello, client 002", CharsetUtil.UTF_8));
            }
        }, 5, TimeUnit.SECONDS);
    }
}
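For the third scenario (a non-Reactor thread calling Channel methods), here is a minimal sketch: a hypothetical push service keeps a map from user id to Channel, and a business thread submits the write to that channel's own EventLoop so it is executed on the correct Reactor thread via its task queue. The PushService class, the userChannels map, and the registration point are assumptions for illustration.

import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.util.CharsetUtil;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PushService {
    // hypothetical registry: filled in elsewhere, e.g. when a user's channel becomes active
    private final Map<String, Channel> userChannels = new ConcurrentHashMap<>();

    public void register(String userId, Channel channel) {
        userChannels.put(userId, channel);
    }

    // called from an ordinary business thread, not from the channel's NioEventLoop
    public void push(String userId, String message) {
        Channel channel = userChannels.get(userId);
        if (channel == null || !channel.isActive()) {
            return;
        }
        // submit the write as a task to the channel's own EventLoop (its taskQueue),
        // so it is consumed asynchronously on the correct Reactor thread
        channel.eventLoop().execute(() ->
                channel.writeAndFlush(Unpooled.copiedBuffer(message, CharsetUtil.UTF_8)));
    }
}

Note that writeAndFlush itself already hands the operation off to the channel's event loop when called from another thread; the explicit execute() here just makes the task-queue hand-off described above visible.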
5.8.8 Summary
-
Netty abstracts two groups of thread pools, BossGroup for receiving client connections and WorkerGroup for network read and write operations.
-
A NioEventLoop represents a thread that executes processing tasks in a continuous loop, and each NioEventLoop has a selector that listens for the socket network channel bound to it.
-
NioEventLoop is serialized internally. The IO thread NioEventLoop is always responsible for reading -> decoding -> processing -> encoding -> sending messages
- NioEventLoopGroup contains multiple NioEventLoops
- Each NioEventLoop contains a Selector and a taskQueue
- Multiple NioChannels can be registered to listen on each NioEventLoop Selector
- Each NioChannel is bound to a unique NioEventLoop
- Each NioChannel is bound to its own ChannelPipeline
5.9 Asynchronous Model
5.9.1 Basic Introduction
-
Asynchrony is the opposite of synchronization. When an asynchronous procedure call is made, the caller does not get the result immediately. The component that actually handles the call notifies the caller through status, notifications, and callbacks when it completes.
-
Netty I/O operations are asynchronous. Bind, Write, and Connect operations simply return a ChannelFuture.
-
The caller does not get the result immediately, but through the Future-listener mechanism, the user can easily obtain the IO operation result actively or through the notification mechanism
-
Netty's asynchronous model is built on Future and callback. Focusing on Future: its core idea is that, given a method fun whose computation may be very time-consuming, it is clearly inappropriate to wait for fun to return. Instead, a Future is returned immediately when fun is called, and the Future can then be used to monitor the progress of fun (this is the Future-Listener mechanism)
5.9.2 Future instructions
-
A Future represents the result of an asynchronous execution and provides methods to check whether execution has completed, retrieve the computed result, and so on.
-
ChannelFuture is an interface: public interface ChannelFuture extends Future&lt;Void&gt;. We can add listeners to it that are notified when the monitored event occurs. See the case description below.
5.9.3 Schematic diagram of working principle
Description:
-
When programming with Netty, intercepting operations and converting inbound and outbound data only requires you to provide a callback or leverage a Future. This makes chaining simple, efficient, and good for writing reusable, generic code.
-
The goal of the Netty framework is to separate your business logic from the network base application code
5.9.4 The Future-Listener mechanism
-
When a Future object is first created, it is in an incomplete state. The caller can obtain the execution state of the operation through the returned ChannelFuture, or register a listener function to be executed once the operation completes.
-
Common operations are as follows
- The isDone method is used to check whether the current operation is complete.
- The isSuccess method is used to determine whether the completed current operation is successful.
- The getCause method is used to get the cause of the failure of the current completed operation.
- The isCancelled method is used to determine whether the current completed operation was cancelled;
- The addListener method is used to register listeners. When the operation completes (isDone returns true), the specified listeners are notified; if the Future object has already completed, the listener is notified immediately
Example: binding a port is an asynchronous operation; when the binding operation finishes, the corresponding listener logic is invoked
// Bind a port and synchronize, generating a ChannelFuture object; this starts the server
ChannelFuture cf = bootstrap.bind(6668).sync();
cf.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            System.out.println("Listening successful..." + 6668);
        }
    }
});
5.10 Quick Start Example -HTTP Service
-
Example Requirements: Create a Netty project using IDEA
-
The Netty server is listening on port 6668, and the browser is requesting “http://localhost:6668/”
-
The server can reply to the client with a message “Hello! I am server 5 “and filter for specific request resources.
TestServer
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class TestServer {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);
        NioEventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new TestServerInitializer());
            ChannelFuture channelFuture = serverBootstrap.bind(8990).sync();
            channelFuture.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (future.isSuccess()) {
                        System.out.println("Start listening.");
                    }
                }
            });
            // Wait for the channel to be closed
            channelFuture.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
TestServerInitializer
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;

public class TestServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        // Get the pipeline
        ChannelPipeline pipeline = ch.pipeline();
        // 1. HttpServerCodec is a codec provided by Netty for handling HTTP
        pipeline.addLast("MyHttpServerCodec", new HttpServerCodec());
        // 2. Add a custom handler
        pipeline.addLast("MyTestHttpServerHandler", new TestHttpServerHandler());
    }
}
TestHttpServerHandler
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.*;
import io.netty.util.CharsetUtil;
import java.net.URI;

// 1. SimpleChannelInboundHandler is a subclass of ChannelInboundHandlerAdapter
// 2. HttpObject: the data exchanged between the client and server is encapsulated as HttpObject
public class TestHttpServerHandler extends SimpleChannelInboundHandler<HttpObject> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
        // Check whether msg is an HttpRequest
        if (msg instanceof HttpRequest) {
            // Because of the nature of the HTTP protocol, each request creates a new pipeline and channel
            System.out.println("pipeline hashcode=" + ctx.pipeline().hashCode() + " TestHttpServerHandler hash=" + this.hashCode());
            System.out.println("msg type=" + msg.getClass());
            System.out.println("Client address:" + ctx.channel().remoteAddress());
            HttpRequest httpRequest = (HttpRequest) msg;
            // Filter resources: get the URI object
            URI uri = new URI(httpRequest.uri());
            if (uri.getPath().equals("/favicon.ico")) {
                return;
            }
            // Send a message to the browser, building the HTTP response content
            // 1. Wrap the data, using Unpooled to convert it to a ByteBuf
            ByteBuf content = Unpooled.copiedBuffer("Hello, this is the server", CharsetUtil.UTF_8);
            // 2. Construct an HTTP response, i.e. HttpResponse; the version, status code, and content must be set
            FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK, content);
            // 3. Set the related headers
            response.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain");
            response.headers().set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes());
            // 4. Write it to the context, which forwards the message through the pipeline until it is returned to the browser
            ctx.writeAndFlush(response);
        }
    }
}
Demonstration effect
[Client]
[Browser]