NIO
- Buffer classes and their common methods (the most commonly used are ByteBuffer, CharBuffer, etc.)
- IO multiplexing
- select, poll, epoll
- Selector, SelectableChannel, SelectionKey
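A minimal java.nio sketch of the write/flip/read cycle that all the Buffer classes share:

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        // Allocate a heap buffer: position=0, limit=capacity=16
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put((byte) 'N').put((byte) 'I').put((byte) 'O');
        // flip() switches from write mode to read mode:
        // limit = current position, position = 0
        buf.flip();
        StringBuilder sb = new StringBuilder();
        while (buf.hasRemaining()) {
            sb.append((char) buf.get());
        }
        System.out.println(sb); // NIO
        // clear() resets position/limit for the next write cycle
        buf.clear();
    }
}
```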
Netty
Server-side demo
```java
ServerBootstrap b = new ServerBootstrap();
EventLoopGroup bossLoopGroup = new NioEventLoopGroup(1);
EventLoopGroup workerLoopGroup = new NioEventLoopGroup();
try {
    // 1. Set the reactor thread groups
    b.group(bossLoopGroup, workerLoopGroup);
    // 2. Set the NIO channel type
    b.channel(NioServerSocketChannel.class);
    // 3. Set the listening port
    b.localAddress(serverPort);
    // 4. Set channel options
    b.option(ChannelOption.SO_KEEPALIVE, true);
    b.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    b.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    // 5. A child channel is created when a client connection arrives
    b.childHandler(new ChannelInitializer<SocketChannel>() {
        protected void initChannel(SocketChannel ch) throws Exception {
            // The pipeline manages the handlers of the child channel;
            // add business handlers to the child channel's pipeline here
            ch.pipeline().addLast(......);
        }
    });
    // 6. Bind the port; sync() waits for the asynchronous bind to complete
    ChannelFuture channelFuture = b.bind().sync();
    logger.info("Server started successfully, listening on port: "
            + channelFuture.channel().localAddress());
```
Client-side demo
```java
Bootstrap b = new Bootstrap();
EventLoopGroup g = new NioEventLoopGroup();
try {
    b.group(g);
    b.channel(NioSocketChannel.class);
    b.option(ChannelOption.SO_KEEPALIVE, true);
    b.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    b.remoteAddress(host, port);
    // Set the channel initializer
    b.handler(......);
    ChannelFuture f = b.connect();
    ...
    f.addListener((ChannelFuture future) -> {
        final EventLoop eventLoop = future.channel().eventLoop();
        if (!future.isSuccess()) {
            // Connection failed; try to reconnect after 10 seconds
            eventLoop.schedule(() -> doConnect(), 10, TimeUnit.SECONDS);
        } else {
            channel = future.channel();
        }
    });
```
- bossGroup listens for I/O events on the server's listening channel (new connections); workerGroup handles I/O events on the established transmission channels
- NioServerSocketChannel is responsible for listening and accepting connections on the server side and is called the parent channel; the accepted NioSocketChannels are called child channels
- Allocators
- PooledByteBufAllocator: pooled ByteBuf allocator
- UnpooledByteBufAllocator: non-pooled ByteBuf allocator; each allocation takes fresh memory
- ByteBuf buffer
- HeapByteBuf: JVM heap memory
- DirectByteBuf: operating-system (off-heap) memory; this is Netty's default buffer type
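The heap-versus-direct distinction mirrors the JDK's own java.nio buffers; a small stdlib sketch (no Netty required):

```java
import java.nio.ByteBuffer;

public class HeapVsDirect {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] inside the JVM heap
        ByteBuffer heap = ByteBuffer.allocate(8);
        // Direct buffer: off-heap memory, which the OS can use for I/O
        // without an extra copy into the heap
        ByteBuffer direct = ByteBuffer.allocateDirect(8);
        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
    }
}
```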
- childHandler assembles the pipeline of each child channel. The parent channel's business logic is fixed (accept a new connection, create the child channel, initialize it), so it needs no special configuration
- Each time a client connects, a channel is created
ChannelInboundHandlerAdapter is an inbound handler. When a message sent by the client is read, the channelRead methods execute in the order the handlers were added; a handler passes the message on to the next inbound handler only if it calls super.channelRead(ctx, msg).
ChannelOutboundHandlerAdapter is an outbound handler. When a message is sent to the client, the write methods execute in reverse order of addition; a handler passes the message on only if it calls super.write(ctx, msg, promise).
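A toy stand-in (plain Java, no Netty) for these ordering rules: inbound handlers fire in the order added, outbound handlers in reverse. The handler names A/B/C are hypothetical.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PipelineOrderDemo {
    public static void main(String[] args) {
        // Handlers in the order they were added with addLast()
        List<String> handlers = List.of("A", "B", "C");

        // Inbound (channelRead): head -> tail, i.e. order of addition
        List<String> inbound = new ArrayList<>(handlers);

        // Outbound (write): tail -> head, i.e. reverse order of addition
        List<String> outbound = new ArrayList<>(handlers);
        Collections.reverse(outbound);

        System.out.println("inbound:  " + inbound);
        System.out.println("outbound: " + outbound);
    }
}
```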
- In Netty, inbound and outbound operations on a network channel are performed asynchronously and return an instance of the ChannelFuture interface, to which you can add a listener for an asynchronous callback. The callback is not executed until the asynchronous task has actually completed. The client demo above shows asynchronous callback listening at connect time.
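The same completion-callback pattern can be sketched with the JDK's CompletableFuture as a stand-in for ChannelFuture (no Netty needed); the "channel-1" value and the printed messages are illustrative only:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCallbackDemo {
    public static void main(String[] args) {
        // Stand-in for ChannelFuture: the "connect" runs asynchronously
        CompletableFuture<String> connectFuture =
                CompletableFuture.supplyAsync(() -> "channel-1");

        // Stand-in for addListener: the callback runs only once the
        // asynchronous task has completed (successfully or not)
        connectFuture.whenComplete((channel, err) -> {
            if (err == null) {
                System.out.println("connected: " + channel);
            } else {
                System.out.println("connect failed, retry later");
            }
        }).join(); // join the callback stage so the demo waits for it
    }
}
```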
Encoding and transport protocols
- ByteToMessageDecoder: an abstract decoder class that extends ChannelInboundHandlerAdapter and turns a ByteBuf into a Java object
- Built-in decoders: LineBasedFrameDecoder, DelimiterBasedFrameDecoder, LengthFieldBasedFrameDecoder, etc.
- MessageToByteEncoder: an abstract encoder class that extends ChannelOutboundHandlerAdapter and converts a Java object into a ByteBuf
- MessageToMessageEncoder: encodes one Java object into another
- Sticky packets: the receiver gets one ByteBuf containing several of the sender's ByteBufs glued together
- Half packet: the receiver gets a ByteBuf that is only part of one of the sender's ByteBufs
- ProtoBuf: a data-interchange format. ProtoBuf is a set of data transfer formats and specifications similar to JSON or XML; it can be understood as a serialization/deserialization scheme
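A minimal sketch of length-field framing (the idea behind LengthFieldBasedFrameDecoder) in plain java.nio, assuming a hypothetical 4-byte big-endian length prefix. It simulates two frames arriving stuck together (sticky packets) plus a half packet that stays in the buffer until more bytes arrive:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class LengthFieldFraming {
    // Encode: 4-byte big-endian length prefix + payload
    static byte[] encode(String msg) {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(4 + body.length)
                .putInt(body.length).put(body).array();
    }

    // Decode every complete frame; an incomplete tail is left in the buffer
    static List<String> decode(ByteBuffer in) {
        List<String> out = new ArrayList<>();
        while (in.remaining() >= 4) {
            in.mark();
            int len = in.getInt();
            if (in.remaining() < len) {
                in.reset(); // half packet: wait for more bytes
                break;
            }
            byte[] body = new byte[len];
            in.get(body);
            out.add(new String(body, StandardCharsets.UTF_8));
        }
        return out;
    }

    public static void main(String[] args) {
        // Two full frames glued together + first 2 bytes of a third frame
        byte[] f1 = encode("hello"), f2 = encode("netty"), f3 = encode("zk");
        ByteBuffer received = ByteBuffer.allocate(f1.length + f2.length + 2);
        received.put(f1).put(f2).put(f3, 0, 2);
        received.flip();
        System.out.println(decode(received)); // [hello, netty]
    }
}
```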
ZooKeeper
Basics
- Download: search for ZooKeeper (e.g. on Baidu) and download it from the official Apache website
- The directory structure
Log directory: /log; data directory: /data. Create a myid file in the data directory to record the node ID, which must be unique within the cluster. The configuration file is the .cfg file in the conf directory. Start the server: ./bin/zkServer.cmd; start the client: ./bin/zkCli.cmd -server ip:port
- The cluster should have an odd number of nodes, and more than half of them must be available for the cluster to serve
- When more than half of the nodes agree, one node is elected as the Leader and the others become Followers
- First election, held when the cluster starts
- If the Leader node goes down, a new election is held
- Inter-node data synchronization
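The "more than half" rule can be made concrete with a small quorum calculation (a sketch; it also shows why even cluster sizes add no extra fault tolerance):

```java
public class QuorumDemo {
    // Strict majority of n nodes
    static int majority(int n) { return n / 2 + 1; }

    // Node failures the cluster survives while keeping a majority
    static int tolerated(int n) { return n - majority(n); }

    public static void main(String[] args) {
        for (int n = 3; n <= 6; n++) {
            System.out.println(n + " nodes: quorum=" + majority(n)
                    + ", tolerated failures=" + tolerated(n));
        }
    }
}
```

Note that 3 and 4 nodes both tolerate only one failure, which is why odd sizes are preferred.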
- Common commands
- View child nodes: ls
- View a node's value: get
- Create, modify, and delete nodes: create, set, delete, rmr
- Node types
- Persistent node
- Persistent sequential node
- Ephemeral (temporary) node
- Ephemeral sequential node
- The storage model
A tree structure with "/" as the root node. Each circle is a node, called a ZNode, and each ZNode has a value, as shown in the figure below
Java API class: Curator
```xml
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.8</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>4.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>4.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
</dependency>
```
Uses
- Distributed IDs (implemented with sequential nodes)
- Distributed locks (Curator's InterProcessMutex)
- Service listening (implemented with watchers; related classes include the native ZooKeeper Watcher and Curator's NodeCache, PathChildrenCache, and TreeCache)
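As a sketch of the distributed-ID idea: ZooKeeper appends a 10-digit monotonically increasing counter to the name of a sequential node, so an ID can be recovered by parsing the created path. The /ids/id- prefix below is hypothetical, and the real path would come back from a create() call against a live ensemble:

```java
public class SequentialNodeId {
    // A sequential znode created under /ids with prefix "id-" comes back
    // as e.g. "/ids/id-0000000042"; the numeric suffix is the ID
    static long idFromPath(String createdPath) {
        int i = createdPath.lastIndexOf('-');
        return Long.parseLong(createdPath.substring(i + 1));
    }

    public static void main(String[] args) {
        System.out.println(idFromPath("/ids/id-0000000042")); // 42
    }
}
```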