Preface
Design patterns are typical solutions to common problems in software design, and you can customize them to solve the specific design problems in your code.
There are many explanations of design patterns on the web, but most of them are toy demos, and after reading them you may still not know how to apply the patterns in real code.
In this article, I’ll start from the design patterns themselves and look at where they are applied in well-known Java frameworks and middleware products.
I. Singleton pattern
The singleton pattern is one of the simplest design patterns in Java and provides a clean way to create objects. It involves a single class that is responsible for creating its own object while ensuring that only that single object is ever created. The class provides direct access to its unique object, without letting callers instantiate the class themselves.
The singleton pattern is very simple, but it comes in several variants; let’s look at them one by one.
1. Eager initialization (hungry style)
The “hungry” style, as the name implies, is impatient: whether anyone ever uses the instance or not, it is created up front.
For example, this code in Dubbo creates a configuration manager.
public class ConfigManager {

    private static final ConfigManager configManager = new ConfigManager();

    private ConfigManager() {}

    public static ConfigManager getInstance() {
        return configManager;
    }
}
Or in RocketMQ, when creating an MQ client instance.
public class MQClientManager {

    private static MQClientManager instance = new MQClientManager();

    private MQClientManager() {}

    public static MQClientManager getInstance() {
        return instance;
    }
}
2. Lazy initialization (lazy style)
The lazy style is the opposite of the eager style: the instance is initialized only on the first call, to avoid wasting memory. However, to stay thread-safe without sacrificing performance, it is typically implemented with a double-checked lock.
Let’s look at how the Seata framework creates its configuration class this way.
public class ConfigurationFactory {

    private static volatile Configuration CONFIG_INSTANCE = null;

    public static Configuration getInstance() {
        if (CONFIG_INSTANCE == null) {
            synchronized (Configuration.class) {
                if (CONFIG_INSTANCE == null) {
                    CONFIG_INSTANCE = buildConfiguration();
                }
            }
        }
        return CONFIG_INSTANCE;
    }
}
3. Static inner classes
As you can see, creating a singleton with a double-checked lock is complicated: you need the lock, the two null checks, and the volatile keyword.
Using a static inner class can achieve the same effect as a double-checked lock, but is simpler to implement.
In the Seata framework, the RM handler uses a static inner class to hold its singleton object.
public class DefaultRMHandler extends AbstractRMHandler {

    protected DefaultRMHandler() {
        initRMHandlers();
    }

    private static class SingletonHolder {
        private static AbstractRMHandler INSTANCE = new DefaultRMHandler();
    }

    public static AbstractRMHandler get() {
        return DefaultRMHandler.SingletonHolder.INSTANCE;
    }
}
You can also create a singleton with an enum, but this is not widely used, at least not in the common open source frameworks, so I’ll only sketch it briefly below.
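A minimal sketch of the enum approach (the class name and method here are mine, not from any of the frameworks above):

public enum EnumSingleton {
    INSTANCE;

    // The JVM guarantees INSTANCE is created exactly once, and enums are
    // naturally resistant to reflection and serialization attacks.
    public void doSomething() {
        System.out.println("working...");
    }
}

Usage is simply EnumSingleton.INSTANCE.doSomething().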
Some people say the eager style is bad because it cannot lazy-load and wastes memory. I think that is too harsh; in fact it is the most common approach in many open source frameworks.
If you definitely want lazy loading, consider a static inner class. If you have other special requirements, such as expensive object construction, use a double-checked lock.
II. Factory pattern
The factory pattern is one of the most commonly used design patterns in Java. It is a creational pattern that provides a clean way to create objects.
Simply put, with the factory pattern you ask a factory to instantiate the concrete class instead of calling new yourself.
1. Simple factory
A simple factory is, indeed, simple: it puts object creation into a factory class and creates different objects depending on a parameter.
In Seata, a distributed transaction framework, a two-phase rollback is required if an exception occurs.
The rollback process finds the undoLog record by transaction ID, parses the data, and generates and executes the undo SQL that reverses the phase-one change.
The problem is that the SQL can be an INSERT, UPDATE, or DELETE, each of which is parsed differently, so different executors are required.
In Seata, there is an abstract undo executor that generates the undo SQL.
public abstract class AbstractUndoExecutor {
    // Generate the undo SQL
    protected abstract String buildUndoSQL();
}
Then there is a factory that gets the undo executor, creates different types of executors based on the type of SQL, and returns them.
public class UndoExecutorFactory {

    public static AbstractUndoExecutor getUndoExecutor(String dbType, SQLUndoLog sqlUndoLog) {
        switch (sqlUndoLog.getSqlType()) {
            case INSERT:
                return new MySQLUndoInsertExecutor(sqlUndoLog);
            case UPDATE:
                return new MySQLUndoUpdateExecutor(sqlUndoLog);
            case DELETE:
                return new MySQLUndoDeleteExecutor(sqlUndoLog);
            default:
                throw new ShouldNeverHappenException();
        }
    }
}
When used, the executor is obtained directly from the factory class.
AbstractUndoExecutor undoExecutor = UndoExecutorFactory.getUndoExecutor(dataSourceProxy.getDbType(),sqlUndoLog);
undoExecutor.executeOn(conn);
We won’t go over the advantages of the simple factory pattern. But it has one small drawback:
whenever there is a new implementation class, you have to modify the factory, which can make the factory logic too complex and the system hard to extend and maintain.
2. Factory method
The factory method pattern solves that problem. It can create a factory interface and multiple factory implementation classes so that if you add new functionality, you can simply add the new factory class without modifying the previous code.
In addition, the factory method pattern can be combined with the template method pattern to extract their common base logic into the parent class, leaving the rest to be implemented by subclasses.
In Dubbo, there is a design for caching that perfectly embodies the factory method pattern + template method pattern.
First, there is a cache interface, which provides methods to set and get cached values.
public interface Cache {
void put(Object key, Object value);
Object get(Object key);
}
And then there’s a cache factory, which returns an implementation of the cache.
public interface CacheFactory {
Cache getCache(URL url, Invocation invocation);
}
In conjunction with the template method pattern, Dubbo created an abstract cache factory class that implements the cache factory interface.
public abstract class AbstractCacheFactory implements CacheFactory {

    private final ConcurrentMap<String, Cache> caches = new ConcurrentHashMap<String, Cache>();

    @Override
    public Cache getCache(URL url, Invocation invocation) {
        url = url.addParameter(Constants.METHOD_KEY, invocation.getMethodName());
        String key = url.toFullString();
        Cache cache = caches.get(key);
        if (cache == null) {
            // Creating the concrete Cache implementation is left to the subclass
            caches.put(key, createCache(url));
            cache = caches.get(key);
        }
        return cache;
    }

    // Implemented by subclasses
    protected abstract Cache createCache(URL url);
}
The common logic lives in getCache(); which concrete cache implementation to create is left to each subclass.
So, each subclass is a specific cache factory class, such as:
ExpiringCacheFactory, JCacheFactory, LruCacheFactory, ThreadLocalCacheFactory.
These factory classes have only one method, which is to create a concrete cache implementation class.
public class ThreadLocalCacheFactory extends AbstractCacheFactory {

    @Override
    protected Cache createCache(URL url) {
        return new ThreadLocalCache(url);
    }
}
ThreadLocalCache is a concrete cache implementation; this one backs the cache with a ThreadLocal.
public class ThreadLocalCache implements Cache {

    private final ThreadLocal<Map<Object, Object>> store;

    public ThreadLocalCache(URL url) {
        this.store = new ThreadLocal<Map<Object, Object>>() {
            @Override
            protected Map<Object, Object> initialValue() {
                return new HashMap<Object, Object>();
            }
        };
    }

    @Override
    public void put(Object key, Object value) {
        store.get().put(key, value);
    }

    @Override
    public Object get(Object key) {
        return store.get().get(key);
    }
}
When used by the client, the cache object is retrieved from the factory.
public static void main(String[] args) {
URL url = URL.valueOf("http://localhost:8080/cache=jacache&.cache.write.expire=1");
Invocation invocation = new RpcInvocation();
CacheFactory cacheFactory = new ThreadLocalCacheFactory();
Cache cache = cacheFactory.getCache(url, invocation);
cache.put("java","java");
System.out.println(cache.get("java"));
}
There are two advantages to this.
First, if you add a new cache implementation, you just add a new cache factory class and nothing else changes.
Second, through the template method pattern, the invariant part is encapsulated and the variable part is left open for extension, with the common code extracted in one place for easy maintenance.
In addition, in Dubbo, registry retrieval is also implemented with the factory method pattern, roughly as sketched below.
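A simplified sketch of that design, condensed from Dubbo’s RegistryFactory and AbstractRegistryFactory (SPI annotations, locking, the real cache key, and the concrete constructor arguments are omitted or simplified; imports are elided as in the other snippets here):

public interface RegistryFactory {
    Registry getRegistry(URL url);
}

public abstract class AbstractRegistryFactory implements RegistryFactory {

    private static final Map<String, Registry> REGISTRIES = new ConcurrentHashMap<>();

    @Override
    public Registry getRegistry(URL url) {
        // Common logic: look up the cache, delegate creation to the subclass
        String key = url.toServiceString();
        return REGISTRIES.computeIfAbsent(key, k -> createRegistry(url));
    }

    // Factory method: each registry type supplies its own creation step
    protected abstract Registry createRegistry(URL url);
}

public class ZookeeperRegistryFactory extends AbstractRegistryFactory {
    @Override
    protected Registry createRegistry(URL url) {
        return new ZookeeperRegistry(url);
    }
}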
3. Abstract factory
The abstract factory pattern creates families of related objects without specifying their concrete classes.
The biggest differences between the factory method pattern and the abstract factory pattern are:
- The factory method pattern has only one abstract product class, and the concrete factory class can create only one instance of the concrete product class.
- The abstract factory pattern has multiple abstract product classes, and a concrete factory class can create instances of multiple concrete product classes.
Let’s continue with the cached example above.
If we now have a data accessor that needs to work with caches and databases at the same time, we need multiple abstract products and multiple concrete product implementations.
Now that the cache-related product classes are in place, let’s create the database-related product implementations.
First, there is a database interface, which is an abstract product class.
public interface DataBase {
void insert(Object tableName, Object record);
Object select(Object tableName);
}
Next, we create two concrete product classes, MysqlDataBase and OracleDataBase.
public class MysqlDataBase implements DataBase {

    Map<Object, Object> mysqlDb = new HashMap<>();

    @Override
    public void insert(Object tableName, Object record) {
        mysqlDb.put(tableName, record);
    }

    @Override
    public Object select(Object tableName) {
        return mysqlDb.get(tableName);
    }
}

public class OracleDataBase implements DataBase {

    Map<Object, Object> oracleDb = new HashMap<>();

    @Override
    public void insert(Object tableName, Object record) {
        oracleDb.put(tableName, record);
    }

    @Override
    public Object select(Object tableName) {
        return oracleDb.get(tableName);
    }
}
Then, create an abstract factory class that returns both a cache object and a database object.
public interface DataAccessFactory {
Cache getCache(URL url);
DataBase getDb();
}
Finally, there are the concrete factory classes, each of which combines concrete products as actual needs dictate.
For example, we need a ThreadLocal based cache implementation and a Mysql based database object.
public class DataAccessFactory1 implements DataAccessFactory {

    @Override
    public Cache getCache(URL url) {
        return new ThreadLocalCache(url);
    }

    @Override
    public DataBase getDb() {
        return new MysqlDataBase();
    }
}
If you need an LRU-based cache implementation and an Oracle-based database object.
public class DataAccessFactory2 implements DataAccessFactory {

    @Override
    public Cache getCache(URL url) {
        return new LruCache(url);
    }

    @Override
    public DataBase getDb() {
        return new OracleDataBase();
    }
}
As you can see, the abstract factory pattern isolates the creation of concrete classes, so clients never need to know what exactly is being created. And because every concrete factory implements the same interfaces defined by the abstract factory, swapping in a different concrete factory instance can change the behavior of the whole system to some extent.
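To make that isolation concrete, here is a sketch of client code using the classes above (the keys and values are arbitrary):

public static void main(String[] args) {
    URL url = URL.valueOf("http://localhost:8080/cache=threadlocal");
    // Switching to DataAccessFactory2 here swaps the whole product family at once
    DataAccessFactory factory = new DataAccessFactory1();

    Cache cache = factory.getCache(url);
    DataBase db = factory.getDb();

    cache.put("user:1", "java");
    db.insert("t_user", "java");
    System.out.println(cache.get("user:1") + " / " + db.select("t_user"));
}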
III. Template method pattern
In the template method pattern, an abstract class defines a template method that lays out the skeleton of an algorithm. Subclasses can override individual steps as needed, but the steps are invoked in the order defined by the abstract class.
Simply put, when multiple subclasses share a method with the same logic, that shared logic is a candidate for a template method.
In the Dubbo cache example above, we already saw the template method pattern in action, but there it was combined with the factory method pattern. Let’s look at the template method pattern on its own.
We know that when there are multiple service providers in our Dubbo application, the service consumer needs to choose one of them to invoke through a load-balancing algorithm.
First, there is a LoadBalance interface that returns an Invoker.
public interface LoadBalance {
<T> Invoker<T> select(List<Invoker<T>> invokers, URL url, Invocation invocation) throws RpcException;
}
Then define an abstract class, AbstractLoadBalance, that implements the LoadBalance interface.
public abstract class AbstractLoadBalance implements LoadBalance {

    @Override
    public <T> Invoker<T> select(List<Invoker<T>> invokers, URL url, Invocation invocation) {
        if (invokers == null || invokers.isEmpty()) {
            return null;
        }
        if (invokers.size() == 1) {
            return invokers.get(0);
        }
        return doSelect(invokers, url, invocation);
    }

    // Abstract method: the subclass selects an Invoker
    protected abstract <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation);
}
The common logic here is the two checks: whether the invoker list is empty, and whether it contains only a single instance. In either case there is no need to call into the subclass; the result is returned directly.
There are four specific load balancing implementations:
- RandomLoadBalance, based on a weighted random algorithm
- LeastActiveLoadBalance, based on the least-active-calls algorithm
- ConsistentHashLoadBalance, based on consistent hashing
- RoundRobinLoadBalance, based on a weighted round-robin algorithm
public class RandomLoadBalance extends AbstractLoadBalance {

    @Override
    protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
        // logic omitted....
        return invokers.get(ThreadLocalRandom.current().nextInt(length));
    }
}
Each of them implements a different algorithm for returning a concrete Invoker object; a simplified version of the weighted random selection is sketched below.
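The following fills in the omitted logic with a simplified weighted random selection. It is not Dubbo’s exact code (the real version also handles warm-up weights), and getWeight() is assumed to be the helper provided by the abstract class:

@Override
protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
    int length = invokers.size();
    int totalWeight = 0;
    boolean sameWeight = true;
    int[] weights = new int[length];
    for (int i = 0; i < length; i++) {
        weights[i] = getWeight(invokers.get(i), invocation);
        totalWeight += weights[i];
        if (sameWeight && i > 0 && weights[i] != weights[i - 1]) {
            sameWeight = false;
        }
    }
    if (totalWeight > 0 && !sameWeight) {
        // Pick a random offset in [0, totalWeight) and see which
        // invoker's weight range it falls into
        int offset = ThreadLocalRandom.current().nextInt(totalWeight);
        for (int i = 0; i < length; i++) {
            offset -= weights[i];
            if (offset < 0) {
                return invokers.get(i);
            }
        }
    }
    // All weights are equal: plain uniform random
    return invokers.get(ThreadLocalRandom.current().nextInt(length));
}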
IV. Builder pattern
The builder pattern constructs a complex object step by step from multiple simple parts. It is a creational pattern that offers a flexible way to create objects.
This pattern commonly appears when building a complex object involves some business logic, such as validation or property conversion. If you set everything up manually on the client side, you end up with a lot of redundant code. In such cases, consider the builder pattern.
For example, in Mybatis, the creation of MappedStatement uses the builder pattern.
As we know, each SQL tag in the XML file generates a MappedStatement object. It contains a large number of attributes, and it is the object we will construct.
public final class MappedStatement {

    private String resource;
    private Configuration configuration;
    private String id;
    private SqlSource sqlSource;
    private ParameterMap parameterMap;
    private List<ResultMap> resultMaps;
    // ... most attributes omitted
}
Then there is an inner class Builder, which is responsible for completing the construction of the MappedStatement object.
First, the Builder class completes part of the construction of the MappedStatement object in its constructor, filling in sensible defaults.
public static class Builder {

    private MappedStatement mappedStatement = new MappedStatement();

    public Builder(Configuration configuration, String id, SqlSource sqlSource, SqlCommandType sqlCommandType) {
        mappedStatement.configuration = configuration;
        mappedStatement.id = id;
        mappedStatement.sqlSource = sqlSource;
        mappedStatement.statementType = StatementType.PREPARED;
        mappedStatement.resultSetType = ResultSetType.DEFAULT;
        // ... most of the setup omitted
    }
}
You can then set individual properties through a series of methods that each return the Builder; these methods are also a convenient place for bits of business logic.
public static class Builder {

    public Builder parameterMap(ParameterMap parameterMap) {
        mappedStatement.parameterMap = parameterMap;
        return this;
    }

    public Builder resultMaps(List<ResultMap> resultMaps) {
        mappedStatement.resultMaps = resultMaps;
        for (ResultMap resultMap : resultMaps) {
            mappedStatement.hasNestedResultMaps =
                    mappedStatement.hasNestedResultMaps || resultMap.hasNestedResultMaps();
        }
        return this;
    }

    public Builder statementType(StatementType statementType) {
        mappedStatement.statementType = statementType;
        return this;
    }

    public Builder resultSetType(ResultSetType resultSetType) {
        mappedStatement.resultSetType = resultSetType == null ? ResultSetType.DEFAULT : resultSetType;
        return this;
    }
}
Finally, provide a build method that returns the completed object.
public MappedStatement build() {
    assert mappedStatement.configuration != null;
    assert mappedStatement.id != null;
    assert mappedStatement.sqlSource != null;
    assert mappedStatement.lang != null;
    mappedStatement.resultMaps = Collections.unmodifiableList(mappedStatement.resultMaps);
    return mappedStatement;
}
On the client side, we create a Builder, chain a series of method calls, and finally call the build() method to obtain the object we need. This is the builder pattern in action.
MappedStatement.Builder statementBuilder =
        new MappedStatement.Builder(configuration, id, sqlSource, sqlCommandType)
                .resource(resource)
                .fetchSize(fetchSize)
                .timeout(timeout)
                .statementType(statementType)
                .keyGenerator(keyGenerator)
                .keyProperty(keyProperty)
                .keyColumn(keyColumn)
                .databaseId(databaseId)
                .lang(lang)
                .resultOrdered(resultOrdered)
                .resultSets(resultSets)
                .resultMaps(getStatementResultMaps(resultMap, resultType, id))
                .resultSetType(resultSetType)
                .flushCacheRequired(valueOrDefault(flushCache, !isSelect))
                .useCache(valueOrDefault(useCache, isSelect))
                .cache(currentCache);
ParameterMap statementParameterMap = getStatementParameterMap(parameterMap, parameterType, id);
MappedStatement statement = statementBuilder.build();
configuration.addMappedStatement(statement);
return statement;
V. Adapter pattern
The adapter pattern acts as a bridge between two incompatible interfaces. This type of design pattern is structural in that it combines the functionality of two separate interfaces.
The adapter pattern is typically used to mask interactions between business logic and third-party services, or differences between old and new interfaces.
We know that in Dubbo, all data is transferred through Netty, which brings us to message encoding and decoding.
So, first of all, it has a codec interface that encodes and decodes.
@SPI
public interface Codec2 {
@Adaptive({Constants.CODEC_KEY})
void encode(Channel channel, ChannelBuffer buffer, Object message) throws IOException;
@Adaptive({Constants.CODEC_KEY})
Object decode(Channel channel, ChannelBuffer buffer) throws IOException;
enum DecodeResult {
NEED_MORE_INPUT, SKIP_SOME_INPUT
}
}
Then, there are several implementation classes, such as DubboCountCodec, DubboCodec, ExchangeCodec, etc.
However, if we open these classes, we find that they are ordinary Dubbo classes that merely implement the Codec2 interface; Netty cannot use them as codecs directly.
That is because a Netty codec must implement the ChannelHandler interface to be registered as a Netty processing component, like MessageToByteEncoder and ByteToMessageDecoder.
Because of this, Dubbo built an adapter specifically for the codec interface.
final public class NettyCodecAdapter {

    private final ChannelHandler encoder = new InternalEncoder();
    private final ChannelHandler decoder = new InternalDecoder();
    private final Codec2 codec;
    private final URL url;
    private final org.apache.dubbo.remoting.ChannelHandler handler;

    public NettyCodecAdapter(Codec2 codec, URL url, org.apache.dubbo.remoting.ChannelHandler handler) {
        this.codec = codec;
        this.url = url;
        this.handler = handler;
    }

    public ChannelHandler getEncoder() {
        return encoder;
    }

    public ChannelHandler getDecoder() {
        return decoder;
    }

    private class InternalEncoder extends MessageToByteEncoder {
        @Override
        protected void encode(ChannelHandlerContext ctx, Object msg, ByteBuf out) throws Exception {
            org.apache.dubbo.remoting.buffer.ChannelBuffer buffer = new NettyBackedChannelBuffer(out);
            Channel ch = ctx.channel();
            NettyChannel channel = NettyChannel.getOrAddChannel(ch, url, handler);
            codec.encode(channel, buffer, msg);
        }
    }

    private class InternalDecoder extends ByteToMessageDecoder {
        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf input, List<Object> out) throws Exception {
            ChannelBuffer message = new NettyBackedChannelBuffer(input);
            NettyChannel channel = NettyChannel.getOrAddChannel(ctx.channel(), url, handler);
            // Decode the object
            codec.decode(channel, message);
            // some code omitted...
        }
    }
}
In the code above, the NettyCodecAdapter class adapts the Codec2 interface: the implementation class is passed in through the constructor, and internal encoder and decoder classes are defined, both of which are ChannelHandlers.
In this way, the encoding and decoding logic in the inner classes actually calls through to the Codec2 interface.
Finally, let’s look at how the adapter is called.
// Get the codec implementation class via SPI; here it is DubboCountCodec
Codec2 codec = ExtensionLoader.getExtensionLoader(Codec2.class).getExtension("dubbo");
URL url = new URL("dubbo", "localhost", 22226);
NettyCodecAdapter adapter = new NettyCodecAdapter(codec, url, NettyClient.this);
ch.pipeline()
        .addLast("decoder", adapter.getDecoder())
        .addLast("encoder", adapter.getEncoder());
That is how Dubbo’s codecs apply the adapter pattern.
VI. Chain of responsibility pattern
The chain of responsibility pattern builds a chain of handler objects for a request, allowing the request to be sent down the chain. On receiving the request, each handler can either process it or pass it to the next handler on the chain.
Let’s look at an example from Netty. We know that when a Netty server processes messages, one or more ChannelHandlers are added. The ChannelPipeline carries these ChannelHandlers, and its implementation reflects the chain of responsibility pattern.
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.childHandler(new ChannelInitializer<NioSocketChannel>() {
    protected void initChannel(NioSocketChannel channel) {
        channel.pipeline()
                .addLast(new ChannelHandler1())
                .addLast(new ChannelHandler2())
                .addLast(new ChannelHandler3());
    }
});
In Netty, each connection corresponds to a Channel, and each newly created Channel is assigned a new ChannelPipeline.
The ChannelPipeline stores ChannelHandlerContext objects; each is the channel-related context object wrapping one handler, the ChannelHandler.
An IO event is handled by either a ChannelInboundHandler or a ChannelOutboundHandler, depending on the direction of the event, and is then forwarded to the next ChannelHandler of the same supertype by a call on the ChannelHandlerContext implementation.
1. ChannelHandler
First, let’s look at the responsible handler interface, the ChannelHandler in Netty, which acts as a container for all the application logic that handles inbound and outbound data.
public interface ChannelHandler {
    // Called when a ChannelHandler is added to a ChannelPipeline
    void handlerAdded(ChannelHandlerContext ctx) throws Exception;
    // Called when a ChannelHandler is removed from a ChannelPipeline
    void handlerRemoved(ChannelHandlerContext ctx) throws Exception;
    // Called when an error occurs in the ChannelPipeline
    void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception;
}
Netty then defines the following two important ChannelHandler subinterfaces:
- ChannelInboundHandler, which handles inbound data and various state changes;
public interface ChannelInboundHandler extends ChannelHandler {
    // Called when a Channel has been registered with its EventLoop and can handle I/O
    void channelRegistered(ChannelHandlerContext ctx) throws Exception;
    // Called when a Channel is deregistered from its EventLoop and can no longer handle I/O
    void channelUnregistered(ChannelHandlerContext ctx) throws Exception;
    // Called when a Channel becomes active; the Channel is connected/bound and ready
    void channelActive(ChannelHandlerContext ctx) throws Exception;
    // Called when a Channel leaves the active state
    void channelInactive(ChannelHandlerContext ctx) throws Exception;
    // Called when data is read from the Channel
    void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception;
    // Called when the previous read operation has completed
    void channelReadComplete(ChannelHandlerContext ctx) throws Exception;
    void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception;
    void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception;
    void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception;
}
- ChannelOutboundHandler, which handles outbound data and allows all operations to be intercepted;
public interface ChannelOutboundHandler extends ChannelHandler {
    // Called when a request is made to bind the Channel to a local address
    void bind(ChannelHandlerContext ctx, SocketAddress localAddress, ChannelPromise promise) throws Exception;
    // Called when a request is made to connect the Channel to a remote peer
    void connect(ChannelHandlerContext ctx, SocketAddress remoteAddress, SocketAddress localAddress, ChannelPromise promise) throws Exception;
    // Called when a request is made to disconnect the Channel from its remote peer
    void disconnect(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception;
    // Called when a request is made to close the Channel
    void close(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception;
    // Called when a request is made to deregister the Channel from its EventLoop
    void deregister(ChannelHandlerContext ctx, ChannelPromise promise) throws Exception;
    // Called when a request is made to read more data from the Channel
    void read(ChannelHandlerContext ctx) throws Exception;
    // Called when a request is made to write data through the Channel to the remote peer
    void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception;
    // Called when a request is made to flush queued data to the remote peer
    void flush(ChannelHandlerContext ctx) throws Exception;
}
2. ChannelPipeline
Since this is called the chain of responsibility pattern, there must be a “chain”; in Netty that is the ChannelPipeline.
The ChannelPipeline is a container for the ChannelHandler chain. It defines methods for propagating inbound and outbound events along the chain, as well as methods for adding and removing handlers.
public interface ChannelPipeline {
    ChannelPipeline addFirst(String name, ChannelHandler handler);
    ChannelPipeline addLast(String name, ChannelHandler handler);
    ChannelPipeline addBefore(String baseName, String name, ChannelHandler handler);
    ChannelPipeline addAfter(String baseName, String name, ChannelHandler handler);
    ChannelPipeline remove(ChannelHandler handler);
    ChannelHandler replace(String oldName, String newName, ChannelHandler newHandler);
    @Override
    ChannelPipeline fireChannelRegistered();
    @Override
    ChannelPipeline fireChannelActive();
    @Override
    ChannelPipeline fireExceptionCaught(Throwable cause);
    @Override
    ChannelPipeline fireUserEventTriggered(Object event);
    @Override
    ChannelPipeline fireChannelRead(Object msg);
    @Override
    ChannelPipeline flush();
    // some methods omitted.....
}
Then let’s look at the implementation. By default there are two nodes, head and tail, linked to each other in the constructor: a standard linked-list structure.
public class DefaultChannelPipeline implements ChannelPipeline {

    final AbstractChannelHandlerContext head;
    final AbstractChannelHandlerContext tail;
    private final Channel channel;

    protected DefaultChannelPipeline(Channel channel) {
        this.channel = ObjectUtil.checkNotNull(channel, "channel");
        tail = new TailContext(this);
        head = new HeadContext(this);
        head.next = tail;
        tail.prev = head;
    }
}
When a new ChannelHandler is added, it is encapsulated as a ChannelHandlerContext object and then inserted into the linked list.
private void addLast0(AbstractChannelHandlerContext newCtx) {
AbstractChannelHandlerContext prev = tail.prev;
newCtx.prev = prev;
newCtx.next = tail;
prev.next = newCtx;
tail.prev = newCtx;
}
3. ChannelHandlerContext
ChannelHandlerContext represents the association between a ChannelHandler and a ChannelPipeline: whenever a ChannelHandler is added to a ChannelPipeline, a ChannelHandlerContext is created for it.
The main function of the ChannelHandlerContext is to manage the interaction between the ChannelHandler it is associated with and other ChannelHandlers in the same ChannelPipeline.
public interface ChannelHandlerContext {
    Channel channel();
    EventExecutor executor();
    ChannelHandler handler();
    ChannelPipeline pipeline();
    @Override
    ChannelHandlerContext fireChannelRegistered();
    @Override
    ChannelHandlerContext fireChannelUnregistered();
    @Override
    ChannelHandlerContext fireChannelActive();
    @Override
    ChannelHandlerContext fireChannelRead(Object msg);
    @Override
    ChannelHandlerContext read();
    @Override
    ChannelHandlerContext flush();
    // some methods omitted...
}
The ChannelHandlerContext is responsible for propagating events to the next handler on the chain.
It has two important methods for finding the next Inbound-type and Outbound-type handlers.
Note the direction: an inbound event propagates from the head of the ChannelPipeline toward the tail, while an outbound event starts at the tail and propagates toward the head.
abstract class AbstractChannelHandlerContext implements ChannelHandlerContext, ResourceLeakHint {

    volatile AbstractChannelHandlerContext next;
    volatile AbstractChannelHandlerContext prev;

    // Find the next Inbound-type handler, moving from head to tail
    private AbstractChannelHandlerContext findContextInbound(int mask) {
        AbstractChannelHandlerContext ctx = this;
        EventExecutor currentExecutor = executor();
        do {
            ctx = ctx.next;
        } while (skipContext(ctx, currentExecutor, mask, MASK_ONLY_INBOUND));
        return ctx;
    }

    // Find the next Outbound-type handler, moving from tail to head
    private AbstractChannelHandlerContext findContextOutbound(int mask) {
        AbstractChannelHandlerContext ctx = this;
        EventExecutor currentExecutor = executor();
        do {
            ctx = ctx.prev;
        } while (skipContext(ctx, currentExecutor, mask, MASK_ONLY_OUTBOUND));
        return ctx;
    }
}
4. Processing flow
When we send a message to the server, the read method is triggered.
public abstract class AbstractNioByteChannel extends AbstractNioChannel {

    public final void read() {
        final ChannelPipeline pipeline = pipeline();
        // The data carrier, ByteBuf
        ByteBuf byteBuf = allocHandle.allocate(allocator);
        pipeline.fireChannelRead(byteBuf);
    }
}
In the code above, the ChannelPipeline is invoked; starting from the head node, it calls the handlers in turn through their context objects.
public class DefaultChannelPipeline implements ChannelPipeline {

    public final ChannelPipeline fireChannelRead(Object msg) {
        AbstractChannelHandlerContext.invokeChannelRead(head, msg);
        return this;
    }
}
Because the first node is the default HeadContext, propagation starts from there via the ChannelHandlerContext.
abstract class AbstractChannelHandlerContext implements ChannelHandlerContext, ResourceLeakHint {

    // Find the next ChannelHandler and execute it
    public ChannelHandlerContext fireChannelRead(final Object msg) {
        invokeChannelRead(findContextInbound(MASK_CHANNEL_READ), msg);
        return this;
    }
}
Then our custom ChannelHandler is called.
public class ChannelHandler1 extends ChannelInboundHandlerAdapter {

    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("ChannelHandler1:" + msg);
        ctx.fireChannelRead(msg);
    }
}
If there is more than one ChannelHandler, each one is free to decide whether to keep passing the request down.
For example, if you decide the message has been fully processed and should go no further, comment out ctx.fireChannelRead(msg); and the chain of responsibility ends there, as the sketch below shows.
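A minimal sketch of such a terminating handler, assuming the same pipeline set up earlier:

public class ChannelHandler2 extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("ChannelHandler2:" + msg);
        // No ctx.fireChannelRead(msg) here, so ChannelHandler3 is never
        // invoked: the inbound event stops propagating at this node.
    }
}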
VII. Strategy pattern
The strategy pattern defines a family of algorithms and encapsulates each one so that they are interchangeable; changing an algorithm does not affect the clients that use it.
The strategy pattern is a very common and useful design pattern. If your business code contains a lot of if...else branches, consider whether it could be rewritten with the strategy pattern, as in the sketch below.
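A minimal before-and-after illustration (the pricing example and all names are hypothetical, not from any framework):

public class PriceService {

    interface PriceStrategy {
        long apply(long base);
    }

    // One strategy per member level, looked up by key instead of branched over
    private static final Map<String, PriceStrategy> STRATEGIES = Map.of(
            "VIP", base -> base * 80 / 100,
            "SVIP", base -> base * 60 / 100);

    // Before: an if...else ladder that grows with every new level
    long priceWithIfElse(String type, long base) {
        if ("VIP".equals(type)) {
            return base * 80 / 100;
        } else if ("SVIP".equals(type)) {
            return base * 60 / 100;
        }
        return base;
    }

    // After: adding a level only means adding an entry to the map
    long priceWithStrategy(String type, long base) {
        return STRATEGIES.getOrDefault(type, b -> b).apply(base);
    }
}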
RocketMQ is an excellent distributed messaging middleware that we are all familiar with. Message-oriented middleware, simply put, lets a client send a message that the server stores and later delivers to consumers for consumption.
There are many types of request messages, and each is processed differently. RocketMQ registers all the processors up front and then has the corresponding processor handle each request based on the request message’s code. That is the strategy pattern at work.
First, all the strategies implement the same interface, in this case the request processor interface.
public interface NettyRequestProcessor {
RemotingCommand processRequest(ChannelHandlerContext ctx, RemotingCommand request)throws Exception;
boolean rejectRequest();
}
This interface does only one thing, which is to handle requests from clients. Different types of requests are encapsulated into different RemotingCommand objects.
RocketMQ has over 90 request types, each distinguished by a code defined in the RequestCode class, sketched below.
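RequestCode is essentially a table of integer constants, along these lines (the constant names appear in the registration code below; the values shown here are only illustrative — check RocketMQ’s RequestCode class for the real ones):

public class RequestCode {
    public static final int SEND_MESSAGE = 10; // illustrative values
    public static final int PULL_MESSAGE = 11;
    public static final int QUERY_MESSAGE = 12;
    public static final int HEART_BEAT = 34;
    // ... 90+ codes in total
}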
Then, define a set of policy classes. Let’s look at a few.
// Default request processor
public class DefaultRequestProcessor implements NettyRequestProcessor {}
// Send-message processor
public class SendMessageProcessor extends AbstractSendMessageProcessor implements NettyRequestProcessor {}
// Pull-message processor
public class PullMessageProcessor implements NettyRequestProcessor {}
// Query-message processor
public class QueryMessageProcessor implements NettyRequestProcessor {}
// Consumer-management processor
public class ConsumerManageProcessor implements NettyRequestProcessor {}
Next, encapsulate these policy classes. In RocketMQ, these processors are registered when the Broker server is started.
public class BrokerController {

    public void registerProcessor() {
        SendMessageProcessor sendProcessor = new SendMessageProcessor(this);
        this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendProcessor, this.sendMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendProcessor, this.sendMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.PULL_MESSAGE, this.pullMessageProcessor, this.pullMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.SEND_REPLY_MESSAGE, replyMessageProcessor, replyMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.SEND_REPLY_MESSAGE_V2, replyMessageProcessor, replyMessageExecutor);

        NettyRequestProcessor queryProcessor = new QueryMessageProcessor(this);
        this.remotingServer.registerProcessor(RequestCode.QUERY_MESSAGE, queryProcessor, this.queryMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.VIEW_MESSAGE_BY_ID, queryProcessor, this.queryMessageExecutor);

        ClientManageProcessor clientProcessor = new ClientManageProcessor(this);
        this.remotingServer.registerProcessor(RequestCode.HEART_BEAT, clientProcessor, this.heartbeatExecutor);
        this.remotingServer.registerProcessor(RequestCode.UNREGISTER_CLIENT, clientProcessor, this.clientManageExecutor);
        // some registrations omitted.....
    }
}
Finally, after Netty receives the request from the client, it finds the corresponding policy class according to the message type to process the message.
public abstract class NettyRemotingAbstract {

    public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
        final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
        // Fall back to the default processor if no match is found
        final Pair<NettyRequestProcessor, ExecutorService> pair =
                null == matched ? this.defaultRequestProcessor : matched;
        final RemotingCommand response = pair.getObject1().processRequest(ctx, cmd);
        // most of the code omitted......
    }
}
If a new request message type appears, RocketMQ does not have to change the dispatch code; it just adds a strategy class and registers it.
VIII. Proxy pattern
The proxy pattern provides a surrogate for another object in order to control access to it.
The proxy pattern is very common in open source frameworks and middleware products. The easier a framework is for us to use, the more complex the work it is probably doing for us behind the scenes, and that is often where the proxy pattern comes in: one object is quietly substituted for another.
1. Dubbo
As an RPC framework, Dubbo has one of the most important functions:
Providing high-performance, proxy-based remote invocation capability: services are defined at interface granularity, shielding developers from the low-level details of remote calls.
Here we focus on two key points:
- Interface-oriented proxying;
- Masking the low-level details of the call.
For example, we have an inventory service that provides an interface to deduct inventory.
public interface StorageDubboService {
int decreaseStorage(StorageDTO storage);
}
In other services, when inventory needs to be deducted, we simply reference this interface through Dubbo.
@Reference
StorageDubboService storageDubboService;
It’s easy to use, but StorageDubboService is just an ordinary service interface with no remote invocation capability of its own.
Dubbo creates proxy classes for these service interfaces; a proxy object is created and returned via the ReferenceBean.
public class ReferenceBean<T> {

    @Override
    public Object getObject() {
        return get();
    }

    public synchronized T get() {
        if (ref == null) {
            init();
        }
        return ref;
    }
}
When we use it, we are actually calling the proxy object, which performs the complex remote-call work: connecting to the registry, load balancing, cluster fault tolerance, connecting to the server to send messages, and so on. A conceptual sketch follows.
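Conceptually, the generated proxy is a JDK dynamic proxy whose InvocationHandler forwards every method call to an Invoker that does the remote work. A heavily simplified sketch, loosely modeled on Dubbo’s InvokerInvocationHandler (the real class also special-cases toString/hashCode/equals and builds the invocation differently across versions):

public class InvokerInvocationHandler implements InvocationHandler {

    private final Invoker<?> invoker;

    public InvokerInvocationHandler(Invoker<?> invoker) {
        this.invoker = invoker;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Every interface call becomes an RpcInvocation that travels through
        // load balancing, cluster fault tolerance, and the network layer.
        return invoker.invoke(new RpcInvocation(method, args)).recreate();
    }
}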
2. Mybatis
Another typical application is Mybatis, which we use all the time. We usually only work with the Mapper interface, and Mybatis finds the corresponding SQL statement and executes it.
public interface UserMapper {
List<User> getUserList();
}
As shown above, UserMapper is just an ordinary interface. How does calling it end up executing our SQL statement?
The answer, again, is a proxy. When Mybatis scans the Mapper interfaces we define, it registers each one as a MapperFactoryBean, which creates and returns a proxy object.
protected T newInstance(SqlSession sqlSession) {
final MapperProxy<T> mapperProxy = new MapperProxy<>(sqlSession, mapperInterface, methodCache);
return (T) Proxy.newProxyInstance(mapperInterface.getClassLoader(), new Class[] { mapperInterface }, mapperProxy);
}
The proxy object uses the invoked method’s name to find the MappedStatement object, calls the executor, parses the SqlSource to generate the SQL, executes it, parses the returned result, and so on; a simplified sketch follows.
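A simplified sketch of what that proxy’s invoke() boils down to (the real MapperProxy also caches MapperMethod objects, handles default methods, and dispatches on the SQL command type; the list-returning shortcut here is an assumption for brevity):

public class MapperProxy<T> implements InvocationHandler {

    private final SqlSession sqlSession;
    private final Class<T> mapperInterface;

    public MapperProxy(SqlSession sqlSession, Class<T> mapperInterface) {
        this.sqlSession = sqlSession;
        this.mapperInterface = mapperInterface;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // "com.example.UserMapper.getUserList" is the id of a MappedStatement
        String statementId = mapperInterface.getName() + "." + method.getName();
        return sqlSession.selectList(statementId, args == null ? null : args[0]);
    }
}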
We won’t go into the detailed implementation of the cases above here; if you are interested, see the author’s other articles.
IX. Decorator pattern
The decorator pattern dynamically adds responsibilities (that is, extra functionality) to an object without changing its structure. It is a structural pattern.
The cache design in Mybatis is a typical application of the decorator pattern.
First of all, we know that the Mybatis executor is the core of Mybatis scheduling; it is responsible for generating and executing SQL statements.
The executor is created when the SqlSession is created, and the default executor is SimpleExecutor.
But to add caching responsibilities to the executor, Mybatis wraps the SimpleExecutor in a CachingExecutor.
The actual operations in CachingExecutor are still delegated to SimpleExecutor, with cache handling added before and after execution.
First, let’s take a look at the decoration process.
public Executor newExecutor(Transaction transaction, ExecutorType executorType) {
    // Default executor type
    executorType = executorType == null ? defaultExecutorType : executorType;
    executorType = executorType == null ? ExecutorType.SIMPLE : executorType;
    Executor executor;
    if (ExecutorType.BATCH == executorType) {
        executor = new BatchExecutor(this, transaction);
    } else if (ExecutorType.REUSE == executorType) {
        executor = new ReuseExecutor(this, transaction);
    } else {
        executor = new SimpleExecutor(this, transaction);
    }
    // If caching is enabled, decorate the executor with CachingExecutor
    if (cacheEnabled) {
        executor = new CachingExecutor(executor);
    }
    executor = (Executor) interceptorChain.pluginAll(executor);
    return executor;
}
When SqlSession executes a method, CachingExecutor is called first, so let’s look at the query method.
public class CachingExecutor implements Executor {

    @Override
    public <E> List<E> query() throws SQLException {
        Cache cache = ms.getCache();
        if (cache != null) {
            flushCacheIfRequired(ms);
            if (ms.isUseCache() && resultHandler == null) {
                List<E> list = (List<E>) tcm.getObject(cache, key);
                if (list == null) {
                    list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
                    tcm.putObject(cache, key, list); // issue #578 and #116
                }
                return list;
            }
        }
        return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
    }
}
In this code, if caching is enabled, results are fetched from the cache first. If caching is not enabled, or the cache holds no result, the delegated SimpleExecutor queries the database.
X. Observer pattern
The observer pattern defines a one-to-many dependency between objects: when an object’s state changes, all objects that depend on it are notified and updated automatically.
In Spring or Spring Boot projects, we sometimes need to run some system initialization after the Spring container has started and finished loading. We can do this by registering an observer, an ApplicationListener. This is the observer pattern in action.
@Component
public class ApplicationStartup implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        System.out.println("Do some system initialization....");
        ApplicationContext context = event.getApplicationContext();
        String[] names = context.getBeanDefinitionNames();
        for (String beanName : names) {
            System.out.println("----------" + beanName + "---------");
        }
    }
}
First, we know that ApplicationContext is the core container in Spring.
public abstract class AbstractApplicationContext extends DefaultResourceLoader
        implements ConfigurableApplicationContext {

    // The observers
    private final Set<ApplicationListener<?>> applicationListeners = new LinkedHashSet<>();

    // The subject (the observed), which multicasts events to the observers
    private ApplicationEventMulticaster applicationEventMulticaster;
}
When the ApplicationContext container is refreshed, the subject (the event multicaster) is initialized and registered with the Spring container.
Then, various observers are registered to the observed, forming a one-to-many dependency.
public abstract class AbstractApplicationContext {

    protected void registerListeners() {
        for (ApplicationListener<?> listener : getApplicationListeners()) {
            getApplicationEventMulticaster().addApplicationListener(listener);
        }
        String[] listenerBeanNames = getBeanNamesForType(ApplicationListener.class, true, false);
        for (String listenerBeanName : listenerBeanNames) {
            getApplicationEventMulticaster().addApplicationListenerBean(listenerBeanName);
        }
        Set<ApplicationEvent> earlyEventsToProcess = this.earlyApplicationEvents;
        this.earlyApplicationEvents = null;
        if (earlyEventsToProcess != null) {
            for (ApplicationEvent earlyEvent : earlyEventsToProcess) {
                getApplicationEventMulticaster().multicastEvent(earlyEvent);
            }
        }
    }
}
At this point, our custom observer object has been registered inside the applicationEventMulticaster.
Finally, when ApplicationContext is refreshed, the ContextRefreshedEvent event is published.
protected void finishRefresh() {
publishEvent(new ContextRefreshedEvent(this));
}
The observers are then notified by calling ApplicationListener.onApplicationEvent().
private void doInvokeListener(ApplicationListener listener, ApplicationEvent event) {
listener.onApplicationEvent(event);
}
Let’s take a look at how this mechanism is applied in Dubbo.
The Dubbo service export process starts when the Spring container publishes its refresh event; on receiving the event, Dubbo immediately executes the service export logic.
public class ServiceBean<T> extends ServiceConfig<T> implements InitializingBean, DisposableBean,
ApplicationContextAware, ApplicationListener<ContextRefreshedEvent>, BeanNameAware,
ApplicationEventPublisherAware {
@Override
public void onApplicationEvent(ContextRefreshedEvent event) {
if (!isExported() && !isUnexported()) {
if (logger.isInfoEnabled()) {
logger.info("The service ready on spring started. service: " + getInterface());
}
export();
}
}
}
As you can see, the ServiceBean in Dubbo also implements the ApplicationListener interface and executes the export method when the Spring container publishes the refresh event. Note that after Dubbo performs the export, it publishes an event of its own.
public class ServiceBean<T> {

    public void export() {
        super.export();
        publishExportEvent();
    }

    private void publishExportEvent() {
        ServiceBeanExportedEvent exportEvent = new ServiceBeanExportedEvent(this);
        applicationEventPublisher.publishEvent(exportEvent);
    }
}
ServiceBeanExportedEvent, the export event, extends Spring’s event base class ApplicationEvent.
public class ServiceBeanExportedEvent extends ApplicationEvent {

    public ServiceBeanExportedEvent(ServiceBean serviceBean) {
        super(serviceBean);
    }

    public ServiceBean getServiceBean() {
        return (ServiceBean) super.getSource();
    }
}
We can then define our own ApplicationListener, that is, an observer, to listen for the Dubbo service interface export event.
@Component
public class ServiceBeanListener implements ApplicationListener<ServiceBeanExportedEvent> {

    @Override
    public void onApplicationEvent(ServiceBeanExportedEvent event) {
        ServiceBean serviceBean = event.getServiceBean();
        String beanName = serviceBean.getBeanName();
        Service service = serviceBean.getService();
        System.out.println(beanName + ":" + service);
    }
}
XI. Command pattern
The command pattern is a behavioral design pattern that turns a request into a standalone object containing all the information associated with the request. This lets you parameterize methods with different requests, delay or queue a request’s execution, and support undoable operations.
Hystrix is Netflix’s open source fault-tolerance framework with self-protection capabilities: it can stop cascading failures, fail fast, and degrade gracefully.
It wraps every call to an external system or dependency in a HystrixCommand or HystrixObservableCommand, and each command executes in a separate thread or under a semaphore. This is a typical application of the command pattern.
Let’s look at an example of a Hystrix application.
First, we create a concrete command class, passing the receiver object in through the constructor.
public class OrderServiceHystrixCommand extends HystrixCommand<Object> {

    // The receiver that actually does the work
    private OrderService orderService;

    public OrderServiceHystrixCommand(OrderService orderService) {
        super(setter());
        this.orderService = orderService;
    }

    // Set the Hystrix parameters
    public static Setter setter() {
        HystrixCommandGroupKey groupKey = HystrixCommandGroupKey.Factory.asKey("orderGroup");
        HystrixCommandKey commandKey = HystrixCommandKey.Factory.asKey("orderService");
        HystrixThreadPoolProperties.Setter threadPoolProperties =
                HystrixThreadPoolProperties.Setter().withCoreSize(1).withQueueSizeRejectionThreshold(1);
        HystrixCommandProperties.Setter commandProperties = HystrixCommandProperties.Setter();
        return Setter.withGroupKey(groupKey)
                .andCommandKey(commandKey)
                .andThreadPoolPropertiesDefaults(threadPoolProperties)
                .andCommandPropertiesDefaults(commandProperties);
    }

    @Override
    protected Object run() throws InterruptedException {
        Thread.sleep(500);
        return orderService.orders();
    }

    @Override
    protected Object getFallback() {
        System.out.println("-------------------------------");
        return new ArrayList();
    }
}
Then, when the client calls, create the command class and execute it.
@RestController
public class OrderController {

    @Autowired
    OrderService orderService;

    @RequestMapping("/orders")
    public Object orders() {
        OrderServiceHystrixCommand command = new OrderServiceHystrixCommand(orderService);
        return command.execute();
    }
}
The command pattern and the strategy pattern may look similar, since both parameterize an object with some behavior, but their intents are quite different.
With commands, we can turn any operation into an object whose parameters become the object’s member variables, and we can then do anything with the request: defer its execution, log it, keep a history of commands, and so on, as in the sketch below.
The strategy pattern, by contrast, describes different ways of doing the same thing, letting you swap algorithms within a single context class.
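For instance, because each command carries everything it needs, commands can be collected and executed later. A minimal sketch of deferred execution reusing the OrderServiceHystrixCommand above (the queue itself is mine and assumes an OrderService in scope; it is unrelated to Hystrix’s internals):

Queue<HystrixCommand<Object>> pending = new ArrayDeque<>();
pending.add(new OrderServiceHystrixCommand(orderService));
// ... collect more commands ...

// Whoever drains the queue needs no knowledge of which
// service each command wraps.
while (!pending.isEmpty()) {
    System.out.println(pending.poll().execute());
}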
Conclusion
This article has focused on how design patterns are implemented in different frameworks, to help us better understand the ideas behind the patterns and their application scenarios. Friends with different ideas are welcome to leave a comment and discuss.
Writing original content is not easy. Please leave a like before you go; it will keep the author writing.