This article discusses BIO and NIO together with code. All of the code below can be compiled and run successfully; it is not mere pseudo-code.

BIO (Blocking IO)

The earliest model of network communication

accept() blocks, and so does reading from a connected client. If the server reads a client on its only thread, it blocks until that client sends data and cannot listen for requests from other clients in the meantime. The solution is to spawn a thread to process the client's input whenever a connection is received: one thread per connection.

import java.io.*;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;


public class SocketBIO {

    public static void main(String[] args) throws Exception {

        ServerSocket server = null;

        try {
            server = new ServerSocket(9090);
            // server.bind(new InetSocketAddress("127.0.0.1", 9090)); // alternative: bind explicitly instead of passing the port to the constructor

            System.out.println("step1: new ServerSocket(9090)");

            while (true) {

                Socket client = server.accept(); // 1: blocks until a client connects

                System.out.println("step2: client port: " + client.getPort());

                // Create per thread per connection to prevent server blocking
                new Thread(new Runnable() {

                    @Override
                    public void run() {
                        InputStream inputStream = null;
                        try {
                            inputStream = client.getInputStream(); // BIO separates getInputStream()/getOutputStream(); in NIO a single SocketChannel is both readable and writable
                            BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
                            while (true) {
                                System.out.println("step3: readClient");
                                String s = reader.readLine(); // 2: blocks until a line arrives
                                System.out.println("step3: read: " + s + " from client: " + client.getPort());
                            }
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }).start(); // start the per-connection thread; the blocking read happens there

                // Single-threaded alternative (commented out - it would block the accept loop):
//                // get the input character stream
//                BufferedReader reader = new BufferedReader(new InputStreamReader(
//                        client.getInputStream()));
//                // get the output character stream
//                BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(
//                        client.getOutputStream()));
//
//                while (true) {
//                    String s = reader.readLine(); // 2: blocks
//                    System.out.println("client[" + client.getPort() + "]: " + s);
//
//                    writer.write(s + "\n");
//                    writer.flush();
////                    writer.close();
//
//                    if (s.equals("quit")) {
//                        System.out.println("client close!");
//                        break;
//                    }
//                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (server != null) {
                    server.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

Because BIO dedicates a blocking thread to every connection from the moment it is created, and some connections may never send any data, threads are wasted. Hence non-blocking NIO.

NIO (Non-blocking IO/New IO)

A synchronous, non-blocking network communication model

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.LinkedList;
import java.util.concurrent.TimeUnit;


/**
 * The transition from BIO to NIO - this is not real NIO yet.
 * NIO in Java consists of SocketChannel, Selector, configureBlocking(), SelectionKey, handlers
 * and ByteBuffer; a SelectionKey encapsulates a SocketChannel's registration with a Selector.
 * This example does not use the Selector multiplexer, so it is not real NIO, but this style
 * is adequate when there are not many clients.
 */
public class SocketNIO {

    public static void main(String[] args) {
        LinkedList<SocketChannel> clients = new LinkedList<>();

        try {
            // In the BIO, getInputStream() and getOutputStream() are used to read and write data.
            // In NIO, the data is stored in the Buffer Buffer, and the read and write data are transmitted using the Buffer, ServerSocketChannel and SocketChannel channels. Buffer is used to package data, and Channel is used to transmit data.
            // Channel and Buffer complete the transmission of data in the network
            // Channel features:
            // 1. Use configureBlocking(false) to set it to non-blocking, and read(), accept(), and connect() do not block
            // 2. ServerSocketChannel and SocketChannel inherit from the SelectableChannel abstract class,
            //    so both have register(Selector sel, int ops) to register themselves with a Selector
            // 3. ServerSocketChannel and SocketChannel are thread-safe
            // ServerSocketChannel and SocketChannel are abstract classes; call open() to create an instance
            ServerSocketChannel ssc = ServerSocketChannel.open();
            ssc.bind(new InetSocketAddress(9090)); // bind the server port to enable listening
            // ServerSocketChannel and SocketChannel use blocking mode by default in NIO
            ssc.configureBlocking(false); // false = non-blocking; subsequent accept() calls will not block

            while (true) {
                try {
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                SocketChannel client = ssc.accept(); // accept a client connection without blocking; returns null when no client is waiting (the underlying Linux accept returns -1)
                // if a client is waiting, accept returns its fd wrapped in a SocketChannel object
                // non-blocking means the code always proceeds; we just branch on the different results

                if (client == null) {
                    System.out.println("client is null");
                } else {
                    client.configureBlocking(false); // reads from this client will also be non-blocking
                    int port = client.socket().getPort();//getPort() returns the remote port to which the socket is connected
                    System.out.println("client port:" + port);
                    clients.add(client);
                }

                // In the BIO, InputStream and OutputStream are used to operate on arrays that store data.
                // In NIO, the data transmitted in the network is stored in the Buffer Buffer. The read and write operations of data are operated on the Buffer, that is, the API of the used Buffer. Buffer is a container for storing data, and the operation of data is of course the operation container
                // The Buffer layer also uses arrays, but provides apis for manipulating arrays such as get() and put().
                
                // Buffers are divided into direct and non-direct buffers.
                // A non-direct buffer is backed by an array inside the JVM heap; a direct buffer is allocated in native memory outside the heap.
                // Reading and writing through a direct buffer skips the extra copy between the JVM heap and native memory (the "zero copy" idea that Netty also uses), improving performance
                // Buffer is not thread safe
                // allocateDirect()/allocate() creates Buffer for read operations, where direct buffers are created
                ByteBuffer buffer = ByteBuffer.allocateDirect(1024); // other ways to create one: ByteBuffer.allocate(int) for a heap buffer, ByteBuffer.wrap(byte[] array)

                // Check whether the connected client can read and write data. In NIO, the Buffer is read and write
                for (SocketChannel cli : clients) {
                    // Read () : The position property of the parent Buffer of ByteBuffer moves to the next readable or writable position, so flip() should be called after read() to reset limit = position and position to 0
                    int read = cli.read(buffer); // non-blocking: > 0 means bytes were read into the buffer, 0 means no data available, -1 means the client closed the connection; read() does not block
                    if (read > 0) {
                        // methods that reset buffer indices: 1) rewind()  2) clear()  3) flip()
                        // flip() switches the buffer to read mode: limit = position, position = 0
                        buffer.flip();
                        byte[] aaa = new byte[buffer.limit()];
                        buffer.get(aaa);//get(byte[] DST) transfers contiguous bytes from the Buffer to the byte[] DST target array
                        String b = new String(aaa);
                        System.out.println(cli.socket().getPort() + ":" + b);
                        buffer.clear();  // Reset the Buffer to facilitate the next operation on the same Buffer

                        // buffer.array() cannot be called here to get a JVM array, because allocateDirect() created a direct buffer with no backing array
// System.out.println(cli.socket().getPort() + ":" + new String(buffer.array()));
// ByteBuffer bufferWrite = ByteBuffer.wrap("HelloClient".getBytes());

                        ByteBuffer bufferWrite = ByteBuffer.wrap(b.getBytes());
                        cli.write(bufferWrite); // write the buffer back to the client
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
  1. ServerSocketChannel, SocketChannel: the NIO counterparts of BIO's ServerSocket and Socket. Both derive from the SelectableChannel abstract class, so both have register(Selector sel, int ops) to add themselves to a Selector multiplexer. After configureBlocking(false), accept(), read() and connect() are all non-blocking. In NIO, ServerSocketChannel and SocketChannel perform their reads and writes through Buffers.

    SocketChannels are thread-safe

  2. ByteBuffer: inherits from the Buffer abstract class. Buffers are divided into direct buffers, created by allocateDirect() (DirectByteBuffer), and non-direct heap buffers, created by allocate() (HeapByteBuffer). In traditional IO, InputStream operates directly on an array, and arrays expose few APIs; in NIO, Buffer wraps the array and provides APIs such as get() and put() to operate on it. So in NIO the Buffer is read and written instead of the raw array of traditional IO.

    Buffers are not thread-safe
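As a small, self-contained sketch of the Buffer API described above (the class and method names here are only for this demo), a put/flip/get/clear round trip on a heap buffer looks like this:

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    // Writes msg into a buffer, flips it, and reads the bytes back out.
    static String roundTrip(String msg) {
        ByteBuffer buffer = ByteBuffer.allocate(64); // heap (non-direct) buffer
        buffer.put(msg.getBytes());                  // position advances as bytes are written
        buffer.flip();                               // switch to read mode: limit = position, position = 0
        byte[] dst = new byte[buffer.limit()];
        buffer.get(dst);                             // drain the readable bytes into dst
        buffer.clear();                              // reset indices so the buffer can be reused
        return new String(dst);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello")); // prints "hello"
    }
}
```

flip() before reading and clear() before the next write are the two steps most often forgotten; without flip(), get() would operate past the data just written.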

SocketChannels that have connected to the server are stored in a linked list, and the server constantly polls them to see whether any has sent data, while the ServerSocketChannel also keeps listening for new connection requests. This constant polling consumes resources, which is why NIO has the Selector multiplexer, an important component for improving efficiency and saving resources.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;
import java.util.Set;


/**
 * If a call to epoll_wait() returns a large number of socket fds, we can use divide and conquer
 * to separate connection requests from established connections that have data to send.
 * If there are many pending connection requests, or many established connections, we can create
 * multiple Selector multiplexers, each handling either connection requests or data requests.
 * Hint: Netty provides thread pools to handle data requests; in plain NIO a thread pool must be written manually.
 *
 * Multithreading: one thread handles sockets that need to be connected, and multiple threads
 * handle sockets that need to be read or written.
 */

public class SocketMultiplexingSingleThreadv1 {

    // ServerSocketChannel and SocketChannel are abstract classes; call open() to create them
    private ServerSocketChannel server = null;
    private Selector selector = null; // the multiplexer
    int port = 9090;

    // Selector: a multiplexer that can watch many Channels registered with it at once. When registering,
    // you specify which event of the Channel to watch; when that event occurs, the Channel is returned.
    // With BIO you can only poll each connected client; with a Selector, one call reports the state of many Channels.
    // This is still a synchronous IO model: the Selector only returns the ready Channels, and the server then handles them itself.
    // Different operating systems provide different selectors: Windows uses select; Linux has select, poll and epoll
    // (epoll is preferred because it is efficient); macOS uses kqueue. select scans every registered channel,
    // while epoll returns only the channels on which the specified event actually occurred.
    // ServerSocketChannel can only register for ACCEPT events; SocketChannel can register for CONNECT, READ and WRITE.
    // SelectionKey: created when a Channel registers with a Selector; it holds the channel, the selector,
    // interestOps, readyOps and so on.
    // A Selector maintains three sets:
    // 1. keys          - SelectionKeys of all channels registered with this Selector
    // 2. selectedKeys  - SelectionKeys of the ready channels
    // 3. cancelledKeys - SelectionKeys of cancelled channels
    public void initServer() {
        try {
            server = ServerSocketChannel.open();
            server.configureBlocking(false);
            server.bind(new InetSocketAddress(port));
            // With select/poll the program just keeps the fds in JVM space and passes them all to the
            // kernel on each select()/poll() system call. With epoll, open() maps to epoll_create, returning fd3
            selector = Selector.open(); // select, poll or epoll; epoll is preferred where available
            // server is roughly fd4 in the LISTEN state
            /*
             * register:
             * select/poll: fd4 is stored in the space the program created above
             * epoll:       epoll_ctl(fd3, ADD, fd4, EPOLLIN)
             * Previously accept()/read() had to be called per socket. Now a single select()/poll()/epoll_wait()
             * call hands the sockets to the kernel, which detects connections or data via interrupts and
             * returns the ready sockets. select returns all registered sockets; epoll returns only the ready ones.
             */
            server.register(selector, SelectionKey.OP_ACCEPT);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void start() {
        initServer();
        System.out.println("Server started...");
        try {
            while (true) {
                // Set<SelectionKey> keys = selector.keys(); // all SelectionKeys registered with this Selector
                // System.out.println(keys.size() + " size");

                // select() maps to the kernel's select()/poll()/epoll_wait(); the argument is a timeout in
                // milliseconds, so the call blocks at most that long waiting for at least one ready socket
                while (selector.select(500) > 0) { // returns the number of keys added to the ready set
                    Set<SelectionKey> selectionKeys = selector.selectedKeys(); // the ready set
                    Iterator<SelectionKey> iterator = selectionKeys.iterator();
                    // Whichever selector implementation is used, it only returns ready channels; the program
                    // still processes each read/write itself - a synchronous IO model. Without a multiplexer,
                    // a system call is needed per fd; with one, a single call reports many ready fds.
                    while (iterator.hasNext()) {
                        SelectionKey next = iterator.next();
                        iterator.remove(); // without removal the key stays in selectedKeys and would be processed again
                        if (next.isAcceptable()) {
                            // a new client connection is ready. What happens to the new fd?
                            // select/poll: stored in the JVM alongside the listening fd4
                            // epoll:       epoll_ctl registers the new client fd in the kernel
                            acceptHandler(next);
                        } else if (next.isReadable()) {
                            readHandler(next); // reads and writes are both handled here
                            // This may block the current thread, so it could be handed to a thread pool.
                            // Netty ships thread pools for this; Redis uses epoll plus IO threads and worker
                            // threads; Tomcat 8/9 decouple IO from processing with an asynchronous mode
                        }
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void readHandler(SelectionKey next) {
        SocketChannel sc = null;
        try {
            sc = (SocketChannel) next.channel();
            ByteBuffer buffer = ByteBuffer.allocate(512);
            buffer.clear();
            int read = sc.read(buffer); // the data lands in the buffer; then operate on the buffer
            if (read != -1) {
                // allocate() creates a non-direct buffer backed by a JVM array, so buffer.array() is available
                byte[] array = buffer.array();
                System.out.println(sc.socket().getPort() + ":" + new String(array));
                ByteBuffer bufferWrite = ByteBuffer.wrap(array);
                sc.write(bufferWrite);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        // } finally {
        //     if (sc != null) {
        //         try { sc.close(); } catch (IOException e) { e.printStackTrace(); }
        //     }
        // }
    }

    private void acceptHandler(SelectionKey next) {
        try {
            ServerSocketChannel ssc = (ServerSocketChannel) next.channel();
            SocketChannel client = ssc.accept(); // accept the client, e.g. fd7
            client.configureBlocking(false);
            // register maps to epoll_ctl(fd3, ADD, fd7, EPOLLIN)
            client.register(selector, SelectionKey.OP_READ);
            System.out.println("-------------------------------");
            System.out.println("New client: " + client.getRemoteAddress());
            System.out.println("-------------------------------");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        SocketMultiplexingSingleThreadv1 singleThreadv1 = new SocketMultiplexingSingleThreadv1();
        singleThreadv1.start();
    }
}

In addition to the components already listed, NIO includes:

  1. Selector: a multiplexer that can listen on many SocketChannels at once, so the server no longer needs to poll each one for incoming requests. When a registered event occurs on a SocketChannel, the Selector returns that SocketChannel.

    In the previous code we had to poll each connected SocketChannel for incoming data and handle its reads and writes, while also watching for SocketChannels that want to establish a connection with the ServerSocketChannel. To let a single Selector call report many ready SocketChannels, each SocketChannel must first be registered with the Selector, specifying the event type to listen for, via public final SelectionKey register(Selector sel, int ops). When a registered event occurs, selector.selectedKeys() returns the SocketChannels being listened to. Linux has multiple selector implementations but epoll is the best; Windows has no epoll and uses select, and macOS uses kqueue. With epoll, only the ready SocketChannels are returned; with select, every SocketChannel registered with the Selector is returned whether ready or not. The program can then perform the corresponding operation for each event type.

    Event: the event type specified when registering with a Selector: READ, WRITE, CONNECT or ACCEPT. When the event occurs, the corresponding SocketChannel is returned

  2. SelectionKey: created when a SocketChannel registers with a Selector; each registration of a channel with a selector has its own distinct key

  3. Handler: Event Handler An event handler that performs different processing for different events

(The main components are listed above)
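To make the register-then-select cycle concrete, here is a minimal runnable sketch. It uses an in-process Pipe instead of a network socket so it runs without a client; the class and method names are just for this demo:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    // Registers one end of a Pipe with a Selector, makes it readable,
    // and returns how many channels select() reported as ready.
    static int readyCount() throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();                 // a selectable in-process channel pair
        pipe.source().configureBlocking(false);  // channels must be non-blocking to register
        SelectionKey key = pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap("ping".getBytes())); // make the source readable

        int n = selector.select(1000);           // blocks until a registered channel is ready
        // after select(), the key's readyOps include OP_READ
        System.out.println("ready: " + n + ", readable: " + key.isReadable());
        return n;
    }

    public static void main(String[] args) throws Exception {
        readyCount();
    }
}
```

The same three steps (configureBlocking(false), register with an interest op, loop on select()) are what the server code in this article performs with ServerSocketChannel and SocketChannel.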

Now we can distinguish three stages: the initial blocking wait for client requests, then non-blocking polling of established clients for read/write requests, and finally using a Selector to listen for read, write and connection events from many SocketChannels at once, handling each request type accordingly.

All listened-to SocketChannels are registered with the same Selector. Once many SocketChannels sit on one Selector, that Selector can spend a long time processing them, and a client that wants to establish a connection may wait a long time. So we can divide and conquer: keep one Selector listening for all requests but hand the read/write SocketChannels to a thread pool for asynchronous processing, or use multiple Selectors, one for connection requests and several for read/write requests. With multiple Selectors working together, the server can respond quickly to a large number of clients. If multiple Selectors already handle the read/write requests and we want to go further, each of those Selectors can also use a thread pool to process reads and writes asynchronously, improving response speed again.
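The thread-pool idea can be sketched as follows (the pool size, handler name and echo logic are illustrative assumptions, not from the code in this article): the selector thread only dispatches, and worker threads run each read handler:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncReadSketch {
    // Hypothetical worker pool: the selector thread submits each ready key's
    // read handling here instead of running it inline.
    static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Stand-in for readHandler(key): processes the client's data and builds the echo reply.
    static Future<String> handleAsync(String clientData) {
        return pool.submit(() -> "echo: " + clientData);
    }

    public static void main(String[] args) throws Exception {
        // The selector thread stays free to keep selecting while the worker echoes
        System.out.println(handleAsync("hello").get());
        pool.shutdown();
    }
}
```

In a real server the task submitted to the pool would read from the SocketChannel and write the reply back, with care taken around re-registering interest ops so the selector does not report the same key while a worker still owns it.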

An analogy: at first a single telephone operator handles every customer request, old and new. Then dedicated agents are introduced: the operator only takes a new customer's first call and refers them to an agent, and from then on the customer contacts that agent directly, so the operator is responsible only for new customers. The same logic applies here.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;


/**
 * If the application is single-threaded and many clients are connected, the return value of a
 * call to epoll_wait will be large, single-threaded processing will take a long time, and the
 * next call to epoll_wait is delayed. Adding a new client to the listening set is also delayed
 * by the volume of data arriving from connected clients.
 *
 * So we can use multithreading - Netty is multithreaded, with one Selector multiplexer per thread.
 *
 * Accepting connections from new clients and processing requests from connected clients can be
 * done in parallel (parallelism: multiple CPU cores process multiple tasks at the same instant;
 * concurrency: multiple client requests within the same time span). There is no strong dependency
 * between the two tasks, so each can get its own thread; and since a single thread handling both
 * all connected clients and new connections is very time-consuming, multiple threads handle the
 * connected clients' requests - divide and conquer.
 */

public class SocketMultiplexingThreadsTest2 { private static Selector listener; private static Selector work1; private static Selector work2; // Initialize the ServerSocketChannel of the server private static void initServer(a) { try { // In the BIO, getInputStream() and getOutputStream() are used to read and write data. // In NIO, the data is stored in the Buffer Buffer, and the read and write data are transmitted using the Buffer, ServerSocketChannel and SocketChannel channels. Buffer is used to package data, and Channel is used to transmit data. // Channel and Buffer complete the transmission of data in the network // Channel features: // 1. Use configureBlocking(false) to set it to non-blocking, and read(), accept(), and connect() do not block ServerSocketChannel and SocketChannel inherit from SelectableChannel Abstract class. So there's a register(Selector sel, int OPS) that registers itself to a Selector. ServerSocketChannel and SocketChannel are thread-safe ServerSocketChannel, SocketChannel is an abstract class. Call open() to create a SocketChannel object ServerSocketChannel ssc = ServerSocketChannel.open(); ssc.configureBlocking(false); int port = 9090; ssc.bind(new InetSocketAddress(port)); listener = Selector.open(); work1 = Selector.open(); work2 = Selector.open(); // Selector: a multiplexer that can listen for multiple channels registered to its Selector at a time. When registering, it needs to specify which action of the Channel to listen for, that is, the event event. When the specified event occurs, the Channel to listen for will be returned. // When using the BIO, you can only query to see if the connected client has sent data. With Selector, you only need to call it once to listen for the state of multiple channels // We use the synchronous IO model, where a Selector simply returns a stateful Channel, and then the server handles the stateful Channel // Different operating systems use different selectors. 
Windows Select, Linux Select, poll, epoll (because epoll is used efficiently), MAC kqueue, Selector returns different data depending on which Selector is used. Using select returns all channels registered to the Selector. Using epoll returns only channels registered to the Selector that are stative (that is, the specified event occurred) ServerSocketChannel can only specify accept events when registered with Selector. SocketChannel can specify connect, read, and write events // SelectionKey: A SelectionKey is created when a Channel is registered with a Selector. It has properties such as Channel, Selector, interestOps, readyOps, and so on // Selector maintains three sets: // 1. HashSet keys stores selectionkeys for all socketchannels registered to that Selector // 2. HashSet selectedKeys Stores the SelectionKey corresponding to a stateful SocketChannel // 3. HashSet cancelledKeys Stores the SelectionKey corresponding to the cancelled SocketChannel //ServerSocketChannel can only specify ACCEPT events when registered with Selector, and SocketChannel can have three other events besides ACCEPT: CONNECT, READ, and WRITE ssc.register(listener, SelectionKey.OP_ACCEPT); } catch(IOException e) { e.printStackTrace(); }}public static void main(String[] args) { initServer(); NewThread accept = new NewThread(listener, 2); NewThread read1 = new NewThread(work1); NewThread read2 = new NewThread(work2); accept.setName("listener"); read1.setName("read1"); read2.setName("read2"); accept.start(); // Let the listener thread run first try { TimeUnit.MILLISECONDS.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } read1.start(); read2.start(); System.out.println("Server started..."); try { System.in.read(); // Block the server from shutting down } catch (IOException e) { e.printStackTrace(); } System.out.println("server end!"); } / / the server public static class NewThread extends Thread { private Selector selector; private volatile static 
LinkedBlockingQueue<SocketChannel>[] queues; private static int queLen; private final AtomicInteger rand = new AtomicInteger(0); // Place it randomly in a queue private static final AtomicInteger index = new AtomicInteger(0); // Randomly specifies the subscript of the queue array for the current thread private int id; //NIO Buffer buffers are not thread-safe. There is one Buffer per thread, and all Buffer reads and writes within a thread are synchronized private ByteBuffer buffer; public NewThread(a) {}// The thread that processes the ACCEPT request public NewThread(Selector selector, int queNum) { this.selector = selector; queues = new LinkedBlockingQueue[queNum]; queLen = queNum; for (int i = 0; i < queues.length; i++) { queues[i] = new LinkedBlockingQueue<>(); } id = -1; Queues [0]; queues[0]; queues[0]; queues[0]; queues[0 System.out.println("Boss start"); } // The thread that handles the READ request public NewThread(Selector selector) { this.selector = selector; id = index.getAndIncrement() % queLen; this.buffer = ByteBuffer.allocate(512); // Since buffers are not thread-safe, there is only one Buffer per thread, so processing of buffers within each thread is synchronous System.out.println("worker: " + id + "Start"); } @Override public void run(a) { try { while (true) { // System.out.println(Thread.currentThread().getName() + " id: " + id + ", Selector's keys: "); // Set keys = selector.keys(); // for (SelectionKey key : keys) { // int localPort = 0; // if (key.channel() instanceof ServerSocketChannel) { // ServerSocketChannel channel = (ServerSocketChannel) key.channel(); // localPort = channel.socket().getLocalPort(); // Returns the local port to which this socket connects / /} // if (key.channel() instanceof SocketChannel) { // SocketChannel channel = (SocketChannel) key.channel(); // localPort = channel.socket().getPort(); // Returns the remote port to which this socket connects / /} // System.out.print(localPort + " "); / /} // System.out.println(); 
// NIO is a synchronous non-blocking model: the Selector only tells you which clients are ready,
// and you still have to process the requests yourself
// select() returns the number of clients that are ready
if (selector.select(500) > 0) {
    // selectedKeys() returns the Selector's ready queue
    Set<SelectionKey> selectionKeys = selector.selectedKeys();
    Iterator<SelectionKey> iterator = selectionKeys.iterator();
    while (iterator.hasNext()) {
        SelectionKey next = iterator.next();
        // Remove the key from the Selector's ready queue. If you don't, it stays in
        // selectedKeys forever, so the next time the client sends data select(500)
        // returns 0 and the server only ever responds to the client's first request
        iterator.remove();
        if (next.isReadable()) {
            // Handle READ requests; to improve efficiency, dispatch to a thread pool
            readHandler(next);
        } else if (next.isAcceptable()) {
            // Handle ACCEPT requests
            acceptHandler(next);
        }
    }
}

// Pay attention here: the listen thread never runs this block; only the read threads
// take the client SocketChannels out of their queue and register them with their own Selector
if (id >= 0 && queues[id].size() > 0) {
    SocketChannel poll = queues[id].poll();
    // TODO: take() would be inappropriate here. This thread does not only register newly
    // assigned channels; it also has to keep polling the clients already registered on
    // its Selector, so it must not block on the queue
    poll.register(selector, SelectionKey.OP_READ);
    System.out.println("=================================================");
    System.out.println("New client: " + poll.socket().getPort() + " assigned to: " + id);
    System.out.println("=================================================");
}
// Work other than handling client requests can run between iterations. Redis, for
// example, runs timed tasks such as memory reclamation here. That is also why this
// is an `if` rather than a `while`
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // The ACCEPT handler: adds the SocketChannel accepted from the ServerSocketChannel
    // to a queue, where it waits for a read thread to register it with its own Selector
    public void acceptHandler(SelectionKey sk) {
        try {
            ServerSocketChannel ssc = (ServerSocketChannel) sk.channel();
            System.out.println(ssc.socket().getLocalPort() + " server is accept");
            SocketChannel client = ssc.accept();
            client.configureBlocking(false);
            int i = rand.getAndIncrement() % queLen;
            queues[i].offer(client);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // The READ handler; to improve efficiency, dispatch to a thread pool
    public void readHandler(SelectionKey sk) {
        try {
            SocketChannel client = (SocketChannel) sk.channel();
            System.out.println(client.socket().getPort() + " client is read");
            // ByteBuffer is NOT thread-safe; this one is a field reused by the
            // current thread across reads, so clear it before each use
            // ByteBuffer buffer = ByteBuffer.allocate(…);
            buffer.clear();
            int read = client.read(buffer);
            // Reads and writes move the buffer's position, so flip() before reading back
            buffer.flip();
            if (read > 0) {
                // array() returns the backing array of a non-direct (heap) buffer, which
                // lives in the JVM; transferring its data is copied in user space as well
                // as kernel space. A direct buffer allocates no space in the JVM.
                // Because the same non-direct buffer is reused, array() still contains
                // stale bytes from earlier reads, so don't return it directly:
                // String s = new String(buffer.array());   // wrong: echoes stale data
                // Optimization: copy exactly limit() bytes into a fresh array instead
                byte[] copyArr = new byte[buffer.limit()];
                buffer.get(copyArr); // get() copies the buffer's contents into copyArr
                String s = new String(copyArr);
                System.out.println(client.socket().getPort() + ":" + s);
                client.write(ByteBuffer.wrap(s.getBytes()));
                if (s.equals("quit")) {
                    sk.cancel(); // if the client sends "quit", disconnect it from the server
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
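The flip/copy pattern in readHandler can be tried in isolation. This is a minimal sketch, independent of any sockets, showing why a reused heap buffer must be copied out via get() rather than returned via array():

```java
import java.nio.ByteBuffer;

public class BufferCopySketch {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocate(1024); // non-direct (heap) buffer

        // First "read": simulate a client sending "hello world"
        buffer.clear();
        buffer.put("hello world".getBytes());
        buffer.flip(); // switch from writing into the buffer to reading from it

        byte[] copy = new byte[buffer.limit()];
        buffer.get(copy); // copy exactly limit() bytes out
        System.out.println(new String(copy)); // hello world

        // Second, shorter "read" into the same reused buffer
        buffer.clear();
        buffer.put("hi".getBytes());
        buffer.flip();

        // array() still holds stale bytes from the first read...
        System.out.println(new String(buffer.array(), 0, 11)); // hillo world
        // ...but copying limit() bytes yields only the new data
        copy = new byte[buffer.limit()];
        buffer.get(copy);
        System.out.println(new String(copy)); // hi
    }
}
```

The stale "hillo world" output is exactly the repeated-data bug described above, which the copyArr optimization avoids.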

One listen thread handles ACCEPT requests and two read threads handle READ requests; each thread has its own Selector. The listen thread assigns each new SocketChannel it accepts to one of the read threads, which registers it with the Selector in that thread. A queue array is used as the container for communication between the threads: storing the SocketChannels in a thread-safe container avoids registering the same SocketChannel with multiple threads’ Selectors. This is divide and conquer; the next step is to handle different request types differently
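The hand-off between the listen thread and the read threads can be reduced to a small sketch. The names queLen, rand, and queues mirror the code above; String stands in for SocketChannel here so the sketch runs without any network:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class HandOffSketch {
    static final int queLen = 2; // one queue per read thread
    @SuppressWarnings("unchecked")
    static final ConcurrentLinkedQueue<String>[] queues = new ConcurrentLinkedQueue[queLen];
    static final AtomicInteger rand = new AtomicInteger();

    static {
        for (int i = 0; i < queLen; i++) queues[i] = new ConcurrentLinkedQueue<>();
    }

    // The listen thread calls this for every accepted channel:
    // round-robin over the read threads' queues
    static int assign(String client) {
        int i = rand.getAndIncrement() % queLen;
        queues[i].offer(client);
        return i;
    }

    public static void main(String[] args) {
        // Four "clients" are spread evenly over the two read threads' queues
        for (int c = 0; c < 4; c++) assign("client-" + c);
        System.out.println(queues[0]); // [client-0, client-2]
        System.out.println(queues[1]); // [client-1, client-3]
    }
}
```

Because each SocketChannel is offered to exactly one queue, only one read thread can ever poll it and register it with its Selector.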

Reactor

Network communication models include: Reactor, Proactor, Asynchronous Completion Token, and Acceptor-Connector

The reactor design pattern is an event handling pattern for handling service requests delivered concurrently to a service handler by one or more inputs. The service handler then demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers

The Reactor is an event-driven communication model for processing multiple requests that reach the server simultaneously. The server analyzes each request to determine its type and distributes it to the appropriate Event Handler for processing

That is exactly how the Selector-based, event-driven NIO code above works
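Stripped of the networking, the pattern is a dispatch table mapping event types to handlers. The following is an illustrative sketch (all names are made up, and plain strings stand in for channels), not NIO code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ReactorSketch {
    enum EventType { ACCEPT, READ }

    // A ready "event" as the demultiplexer would hand it back: a type plus a payload
    static class Event {
        final EventType type;
        final String payload;
        Event(EventType type, String payload) { this.type = type; this.payload = payload; }
    }

    private final Map<EventType, Consumer<Event>> handlers = new HashMap<>();

    // Register a handler for an event type, like registering interest ops on a SelectionKey
    void register(EventType type, Consumer<Event> handler) {
        handlers.put(type, handler);
    }

    // Synchronously dispatch one ready event to its associated handler
    void dispatch(Event e) {
        handlers.get(e.type).accept(e);
    }

    public static void main(String[] args) {
        ReactorSketch reactor = new ReactorSketch();
        reactor.register(EventType.ACCEPT, e -> System.out.println("accepted " + e.payload));
        reactor.register(EventType.READ, e -> System.out.println("read " + e.payload));

        // Simulate two events coming off the ready queue (selectedKeys in NIO)
        reactor.dispatch(new Event(EventType.ACCEPT, "client-1")); // prints "accepted client-1"
        reactor.dispatch(new Event(EventType.READ, "hello"));      // prints "read hello"
    }
}
```

In the NIO code above, select() plays the demultiplexer, SelectionKey carries the event type, and readHandler/acceptHandler are the registered handlers.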