
Source: Alben's home, albenw.github.io/posts/f6a7d…


Background

In a layered code architecture, objects in different layers constantly need to be converted and copied between one another. These operations are repetitive and tedious, which has given rise to a number of tools that do the job gracefully and efficiently, such as BeanUtils, BeanCopier, Dozer, and Orika. This article covers how to use these tools, compares their performance, and analyzes how they work.

Performance analysis

All of these tools are fairly simple to use and quite similar to each other, so let's start with the performance comparison. I use JMH for the benchmark; the code is as follows.

The object to be copied is relatively simple and contains mostly basic types. A warmup iteration is included because some of the tools need to "pre-compile" and cache their mappers, which makes the comparison more objective. The benchmark copies 1,000, 10,000, and 100,000 objects respectively, which are commonly encountered orders of magnitude.
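The SourceVO and TargetVO classes themselves are not shown in the original article. A minimal sketch of what they might look like, reconstructed only from the setters used in getSourceVO() and the fullName -> name mapping configured below; everything else about them is an assumption:

// Hypothetical reconstruction of the benchmark beans. Only the fields visible in
// getSourceVO() and the mapping configuration are grounded in the article;
// getters/setters are omitted for brevity.
public class SourceVO {

    public static class Inner {
        private int value;
        public Inner(int value) { this.value = value; }
        // getter/setter omitted
    }

    private Integer p1;
    private Long p2;
    private Byte p3;
    private java.util.Date date1;
    private String pattr1;
    private String fullName;   // mapped to TargetVO.name
    private Inner in;
    // getters and setters omitted
}

public class TargetVO {
    private Integer p1;
    private Long p2;
    private Byte p3;
    private java.util.Date date1;
    private String pattr1;
    private String name;       // receives SourceVO.fullName
    private SourceVO.Inner in;
    // getters and setters omitted
}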

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(1)
@Warmup(iterations = 1)
@State(Scope.Benchmark)
public class BeanMapperBenchmark {

    @Param({"1000", "10000", "100000"})
    private int times;

    private int time;

    private static MapperFactory mapperFactory;
    private static Mapper mapper;

    static {
        mapperFactory = new DefaultMapperFactory.Builder().build();
        mapperFactory.classMap(SourceVO.class, TargetVO.class)
                .byDefault()
                .register();

        mapper = DozerBeanMapperBuilder.create()
                .withMappingBuilder(new BeanMappingBuilder() {
                    @Override
                    protected void configure() {
                        mapping(SourceVO.class, TargetVO.class)
                                .fields("fullName", "name")
                                .exclude("in");
                    }
                }).build();
    }

    public static void main(String[] args) throws Exception {
        Options options = new OptionsBuilder()
                .include(BeanMapperBenchmark.class.getName())
                .measurementIterations(3)
                .build();
        new Runner(options).run();
    }

    @Setup
    public void prepare() {
        this.time = times;
    }

    @Benchmark
    public void springBeanUtilTest() {
        SourceVO sourceVO = getSourceVO();
        for (int i = 0; i < time; i++) {
            TargetVO targetVO = new TargetVO();
            BeanUtils.copyProperties(sourceVO, targetVO);
        }
    }

    @Benchmark
    public void apacheBeanUtilTest() throws Exception {
        SourceVO sourceVO = getSourceVO();
        for (int i = 0; i < time; i++) {
            TargetVO targetVO = new TargetVO();
            org.apache.commons.beanutils.BeanUtils.copyProperties(targetVO, sourceVO);
        }
    }

    @Benchmark
    public void beanCopierTest() {
        SourceVO sourceVO = getSourceVO();
        for (int i = 0; i < time; i++) {
            TargetVO targetVO = new TargetVO();
            BeanCopier bc = BeanCopier.create(SourceVO.class, TargetVO.class, false);
            bc.copy(sourceVO, targetVO, null);
        }
    }

    @Benchmark
    public void dozerTest() {
        SourceVO sourceVO = getSourceVO();
        for (int i = 0; i < time; i++) {
            TargetVO map = mapper.map(sourceVO, TargetVO.class);
        }
    }

    @Benchmark
    public void orikaTest() {
        SourceVO sourceVO = getSourceVO();
        for (int i = 0; i < time; i++) {
            MapperFacade mapper = mapperFactory.getMapperFacade();
            TargetVO map = mapper.map(sourceVO, TargetVO.class);
        }
    }

    private SourceVO getSourceVO() {
        SourceVO sourceVO = new SourceVO();
        sourceVO.setP1(1);
        sourceVO.setP2(2L);
        sourceVO.setP3(new Integer(3).byteValue());
        sourceVO.setDate1(new Date());
        sourceVO.setPattr1("1");
        sourceVO.setIn(new SourceVO.Inner(1));
        sourceVO.setFullName("alben");
        return sourceVO;
    }
}

Running this on my MacBook produces the following results:

(Image: JMH benchmark results, from albenw.github.io/images/java…)

Score is the average running time in microseconds. In terms of execution efficiency: BeanCopier > Orika > Spring BeanUtils > Dozer > Apache BeanUtils. These results are closely related to how each tool is implemented.

The use and implementation of each tool are detailed below.

Spring's BeanUtils

Usage

This is probably the tool you use most in day-to-day work, since it ships with Spring and is easy to use: BeanUtils.copyProperties(sourceVO, targetVO);
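A minimal usage sketch in the same style as the benchmark tests above (the SourceVO/TargetVO types and the ignored property names are just illustrative; the three-argument overload with ignoreProperties is part of Spring's public API):

@Test
public void springBeanUtilSimpleTest() {
    SourceVO sourceVO = getSourceVO();
    TargetVO targetVO = new TargetVO();

    // Copies every property whose name and type match
    BeanUtils.copyProperties(sourceVO, targetVO);

    // The varargs overload skips the listed property names
    BeanUtils.copyProperties(sourceVO, targetVO, "date1", "pattr1");
}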

Principle

The implementation of Spring's BeanUtils is relatively simple: it obtains the PropertyDescriptors of the two classes through Java's Introspector and matches properties by name and compatible type. Where a match is found, the value is copied by invoking the source getter (readMethod) and the target setter (writeMethod); otherwise the property is ignored.

Spring caches BeanInfo and PropertyDescriptor to improve performance.

(Source: org.springframework:spring-beans:4.3.9.RELEASE)

/**
 * Copy the property values of the given source bean into the given target bean.
 * <p>Note: The source and target classes do not have to match or even be derived
 * from each other, as long as the properties match. Any bean properties that the
 * source bean exposes but the target bean does not will silently be ignored.
 * @param source the source bean
 * @param target the target bean
 * @param editable the class (or interface) to restrict property setting to
 * @param ignoreProperties array of property names to ignore
 * @throws BeansException if the copying failed
 * @see BeanWrapper
 */
private static void copyProperties(Object source, Object target, Class<?> editable,
        String... ignoreProperties) throws BeansException {

    Assert.notNull(source, "Source must not be null");
    Assert.notNull(target, "Target must not be null");

    Class<?> actualEditable = target.getClass();
    if (editable != null) {
        if (!editable.isInstance(target)) {
            throw new IllegalArgumentException("Target class [" + target.getClass().getName() +
                    "] not assignable to Editable class [" + editable.getName() + "]");
        }
        actualEditable = editable;
    }
    // Get the PropertyDescriptors of the target class (cached)
    PropertyDescriptor[] targetPds = getPropertyDescriptors(actualEditable);
    List<String> ignoreList = (ignoreProperties != null ? Arrays.asList(ignoreProperties) : null);

    for (PropertyDescriptor targetPd : targetPds) {
        Method writeMethod = targetPd.getWriteMethod();
        if (writeMethod != null && (ignoreList == null || !ignoreList.contains(targetPd.getName()))) {
            // Get the PropertyDescriptor of the source class (cached)
            PropertyDescriptor sourcePd = getPropertyDescriptor(source.getClass(), targetPd.getName());
            if (sourcePd != null) {
                Method readMethod = sourcePd.getReadMethod();
                if (readMethod != null &&
                        // Check that the target setter's parameter type is assignable
                        // from the source getter's return type
                        ClassUtils.isAssignable(writeMethod.getParameterTypes()[0], readMethod.getReturnType())) {
                    try {
                        if (!Modifier.isPublic(readMethod.getDeclaringClass().getModifiers())) {
                            readMethod.setAccessible(true);
                        }
                        // Invoke the source getter
                        Object value = readMethod.invoke(source);
                        if (!Modifier.isPublic(writeMethod.getDeclaringClass().getModifiers())) {
                            writeMethod.setAccessible(true);
                        }
                        // Invoke the target setter
                        writeMethod.invoke(target, value);
                    }
                    catch (Throwable ex) {
                        throw new FatalBeanException(
                                "Could not copy property '" + targetPd.getName() + "' from source to target", ex);
                    }
                }
            }
        }
    }
}

Summary

Spring's BeanUtils has a very compact implementation, which is one reason its performance is relatively good.

However, such simplicity costs flexibility and extensibility: Spring's BeanUtils has the obvious limitation that properties must have the same name and a compatible type, and anything that does not match is silently skipped.

Apache BeanUtils

Usage

Apache’s BeanUtils and Spring’s BeanUtils are used the same way:

BeanUtils.copyProperties(targetVO, sourceVO);

Note that the positions of the source and target parameters are reversed compared with Spring's version: the destination comes first.

Principle

Apache's BeanUtils works on the same principle as Spring's: it also obtains the class properties through Java's Introspector mechanism and copies them one by one, and it likewise caches BeanInfo and PropertyDescriptors. However, Apache BeanUtils adds a number of rarely used features (support for Map types, for its custom DynaBean type, for property-name expressions, and so on), which degrades its performance relative to Spring's BeanUtils.

(Source: commons-beanutils:commons-beanutils:1.9.3)

public void copyProperties(final Object dest, final Object orig)
        throws IllegalAccessException, InvocationTargetException {

    if (dest == null) {
        throw new IllegalArgumentException("No destination bean specified");
    }
    if (orig == null) {
        throw new IllegalArgumentException("No origin bean specified");
    }
    if (log.isDebugEnabled()) {
        log.debug("BeanUtils.copyProperties(" + dest + ", " + orig + ")");
    }

    // Apache Commons' own DynaBean type
    if (orig instanceof DynaBean) {
        final DynaProperty[] origDescriptors = ((DynaBean) orig).getDynaClass().getDynaProperties();
        for (DynaProperty origDescriptor : origDescriptors) {
            final String name = origDescriptor.getName();
            // Need to check isReadable() for WrapDynaBean
            // (see Jira issue# BEANUTILS-61)
            if (getPropertyUtils().isReadable(orig, name) &&
                    getPropertyUtils().isWriteable(dest, name)) {
                final Object value = ((DynaBean) orig).get(name);
                copyProperty(dest, name, value);
            }
        }
    // Map type
    } else if (orig instanceof Map) {
        @SuppressWarnings("unchecked")
        final
        // Map properties are always of type <String, Object>
        Map<String, Object> propMap = (Map<String, Object>) orig;
        for (final Map.Entry<String, Object> entry : propMap.entrySet()) {
            final String name = entry.getKey();
            if (getPropertyUtils().isWriteable(dest, name)) {
                copyProperty(dest, name, entry.getValue());
            }
        }
    // Standard JavaBean
    } else {
        // Get the PropertyDescriptors of the origin bean
        final PropertyDescriptor[] origDescriptors =
                getPropertyUtils().getPropertyDescriptors(orig);
        for (PropertyDescriptor origDescriptor : origDescriptors) {
            final String name = origDescriptor.getName();
            if ("class".equals(name)) {
                continue;
            }
            if (getPropertyUtils().isReadable(orig, name) &&
                    getPropertyUtils().isWriteable(dest, name)) {
                try {
                    final Object value = getPropertyUtils().getSimpleProperty(orig, name);
                    copyProperty(dest, name, value);
                } catch (final NoSuchMethodException e) {
                    // Should not happen
                }
            }
        }
    }
}

Summary

Apache's BeanUtils is broadly similar to Spring's in implementation, but its performance is much worse, as the comparison above shows. Alibaba's Java development specification explicitly recommends against using it.

BeanCopier

Usage

BeanCopier is part of the cglib library and is also fairly simple to use:

@Test
    public void beanCopierSimpleTest() {
        SourceVO sourceVO = getSourceVO();
        log.info("source={}", GsonUtil.toJson(sourceVO));
        TargetVO targetVO = new TargetVO();
        BeanCopier bc = BeanCopier.create(SourceVO.class, TargetVO.class, false);
        bc.copy(sourceVO, targetVO, null);
        log.info("target={}", GsonUtil.toJson(targetVO));
    }

All you need to do is specify the source and target classes to convert between, and choose whether or not to use a Converter, which is discussed below.
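One practical note, not from the original benchmark: BeanCopier.create involves a cache-key lookup and, on first use, bytecode generation, so in real code the copier instance is usually created once and reused. A minimal sketch of that pattern:

// Cache the copier (e.g. in a static field) instead of calling create() per copy
private static final BeanCopier SOURCE_TO_TARGET_COPIER =
        BeanCopier.create(SourceVO.class, TargetVO.class, false);

@Test
public void cachedBeanCopierTest() {
    SourceVO sourceVO = getSourceVO();
    TargetVO targetVO = new TargetVO();
    SOURCE_TO_TARGET_COPIER.copy(sourceVO, targetVO, null);
}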

In the above performance tests, BeanCopier performed best of all, so let’s look at how it works.

Principle

Instead of using reflection to copy properties, BeanCopier uses cglib to generate a class whose copy method simply calls the corresponding get/set methods, and then executes that class. Because it runs generated bytecode directly, BeanCopier's performance is close to hand-written get/set code.

The BeanCopier.create method:

public static BeanCopier create(Class source, Class target, boolean useConverter) {
        Generator gen = new Generator();
        gen.setSource(source);
        gen.setTarget(target);
        gen.setUseConverter(useConverter);
        return gen.create();
    }

public BeanCopier create() {
            Object key = KEY_FACTORY.newInstance(source.getName(), target.getName(), useConverter);
            return (BeanCopier)super.create(key);
        }

The Generator first builds a cache key via KEY_FACTORY and then calls super.create(key) to generate (or fetch from cache) the bytecode.

KEY_FACTORY itself is a cglib-generated class based on the BeanCopierKey interface:

private static final BeanCopierKey KEY_FACTORY =
      (BeanCopierKey)KeyFactory.create(BeanCopierKey.class);
      
interface BeanCopierKey {
        public Object newInstance(String source, String target, boolean useConverter);
    }

By setting the following system property:

System.setProperty(DebuggingClassWriter.DEBUG_LOCATION_PROPERTY, "path");

cglib will write the generated class files to disk, and we can decompile them to see the code inside.

Here is the generated class behind KEY_FACTORY:

public class BeanCopier$BeanCopierKey$$KeyFactoryByCGLIB$$f32401fd extends KeyFactory implements BeanCopierKey {

    private final String FIELD_0;
    private final String FIELD_1;
    private final boolean FIELD_2;

    public BeanCopier$BeanCopierKey$$KeyFactoryByCGLIB$$f32401fd() {
    }

    public Object newInstance(String var1, String var2, boolean var3) {
        return new BeanCopier$BeanCopierKey$$KeyFactoryByCGLIB$$f32401fd(var1, var2, var3);
    }

    public BeanCopier$BeanCopierKey$$KeyFactoryByCGLIB$$f32401fd(String var1, String var2, boolean var3) {
        this.FIELD_0 = var1;
        this.FIELD_1 = var2;
        this.FIELD_2 = var3;
    }

    // methods like hashCode omitted...
}

Continuing with Generator.create: Generator extends AbstractClassGenerator, the template class cglib uses to generate bytecode, so super.create actually invokes AbstractClassGenerator.create.

AbstractClassGenerator.create eventually calls back into Generator's template method generateClass. We will not go into AbstractClassGenerator's details here; the interesting part is generateClass.

This method "writes" the Java class, much as we would write the code by hand.

public void generateClass(ClassVisitor v) {
    Type sourceType = Type.getType(source);
    Type targetType = Type.getType(target);
    ClassEmitter ce = new ClassEmitter(v);
    // Start "writing" the class
    ce.begin_class(Constants.V1_2,
                   Constants.ACC_PUBLIC,
                   getClassName(),
                   BEAN_COPIER,
                   null,
                   Constants.SOURCE_FILE);

    // Empty constructor
    EmitUtils.null_constructor(ce);

    // Start "writing" the copy method
    CodeEmitter e = ce.begin_method(Constants.ACC_PUBLIC, COPY, null);
    // Get the PropertyDescriptors of the source and target classes via the Introspector
    PropertyDescriptor[] getters = ReflectUtils.getBeanGetters(source);
    PropertyDescriptor[] setters = ReflectUtils.getBeanSetters(target);

    Map names = new HashMap();
    for (int i = 0; i < getters.length; i++) {
        names.put(getters[i].getName(), getters[i]);
    }
    Local targetLocal = e.make_local();
    Local sourceLocal = e.make_local();
    if (useConverter) {
        e.load_arg(1);
        e.checkcast(targetType);
        e.store_local(targetLocal);
        e.load_arg(0);
        e.checkcast(sourceType);
        e.store_local(sourceLocal);
    } else {
        e.load_arg(1);
        e.checkcast(targetType);
        e.load_arg(0);
        e.checkcast(sourceType);
    }
    for (int i = 0; i < setters.length; i++) {
        PropertyDescriptor setter = setters[i];
        // Find the getter with the same name as the setter
        PropertyDescriptor getter = (PropertyDescriptor) names.get(setter.getName());
        if (getter != null) {
            // Get the read/write methods
            MethodInfo read = ReflectUtils.getMethodInfo(getter.getReadMethod());
            MethodInfo write = ReflectUtils.getMethodInfo(setter.getWriteMethod());
            if (useConverter) {
                Type setterType = write.getSignature().getArgumentTypes()[0];
                e.load_local(targetLocal);
                e.load_arg(2);
                e.load_local(sourceLocal);
                e.invoke(read);
                e.box(read.getSignature().getReturnType());
                EmitUtils.load_class(e, setterType);
                e.push(write.getSignature().getName());
                e.invoke_interface(CONVERTER, CONVERT);
                e.unbox_or_zero(setterType);
                e.invoke(write);
            } else if (compatible(getter, setter)) {
                e.dup2();
                e.invoke(read);
                e.invoke(write);
            }
        }
    }
    e.return_value();
    e.end_method();
    ce.end_class();
}

private static boolean compatible(PropertyDescriptor getter, PropertyDescriptor setter) {
    // TODO: allow automatic widening conversions?
    return setter.getPropertyType().isAssignableFrom(getter.getPropertyType());
}

If you don’t have to use Cglib to understand the code generation process, let’s look at the code generated without using useConverter:

public class Object$$BeanCopierByCGLIB$$d1d970c8 extends BeanCopier {

    public Object$$BeanCopierByCGLIB$$d1d970c8() {
    }

    public void copy(Object var1, Object var2, Converter var3) {
        TargetVO var10000 = (TargetVO) var2;
        SourceVO var10001 = (SourceVO) var1;
        var10000.setDate1(((SourceVO) var1).getDate1());
        var10000.setIn(var10001.getIn());
        var10000.setListData(var10001.getListData());
        var10000.setMapData(var10001.getMapData());
        var10000.setP1(var10001.getP1());
        var10000.setP2(var10001.getP2());
        var10000.setP3(var10001.getP3());
        var10000.setPattr1(var10001.getPattr1());
    }
}

The generated code is essentially the same as what you would write by hand, isn't it?

Now look at the version generated when useConverter is enabled:

public class Object$$BeanCopierByCGLIB$$d1d970c7 extends BeanCopier {

    private static final Class CGLIB$load_class$java$2Eutil$2EDate;
    private static final Class CGLIB$load_class$beanmapper_compare$2Evo$2ESourceVO$24Inner;
    private static final Class CGLIB$load_class$java$2Eutil$2EList;
    private static final Class CGLIB$load_class$java$2Eutil$2EMap;
    private static final Class CGLIB$load_class$java$2Elang$2EInteger;
    private static final Class CGLIB$load_class$java$2Elang$2ELong;
    private static final Class CGLIB$load_class$java$2Elang$2EByte;
    private static final Class CGLIB$load_class$java$2Elang$2EString;

    public Object$$BeanCopierByCGLIB$$d1d970c7() {
    }

    public void copy(Object var1, Object var2, Converter var3) {
        TargetVO var4 = (TargetVO) var2;
        SourceVO var5 = (SourceVO) var1;
        var4.setDate1((Date) var3.convert(var5.getDate1(), CGLIB$load_class$java$2Eutil$2EDate, "setDate1"));
        var4.setIn((Inner) var3.convert(var5.getIn(), CGLIB$load_class$beanmapper_compare$2Evo$2ESourceVO$24Inner, "setIn"));
        var4.setListData((List) var3.convert(var5.getListData(), CGLIB$load_class$java$2Eutil$2EList, "setListData"));
        var4.setMapData((Map) var3.convert(var5.getMapData(), CGLIB$load_class$java$2Eutil$2EMap, "setMapData"));
        var4.setP1((Integer) var3.convert(var5.getP1(), CGLIB$load_class$java$2Elang$2EInteger, "setP1"));
        var4.setP2((Long) var3.convert(var5.getP2(), CGLIB$load_class$java$2Elang$2ELong, "setP2"));
        var4.setP3((Byte) var3.convert(var5.getP3(), CGLIB$load_class$java$2Elang$2EByte, "setP3"));
        var4.setPattr1((String) var3.convert(var5.getPattr1(), CGLIB$load_class$java$2Elang$2EString, "setPattr1"));
        var4.setSeq((Long) var3.convert(var5.getSeq(), CGLIB$load_class$java$2Elang$2ELong, "setSeq"));
    }

    static void CGLIB$STATICHOOK1() {
        CGLIB$load_class$java$2Eutil$2EDate = Class.forName("java.util.Date");
        CGLIB$load_class$beanmapper_compare$2Evo$2ESourceVO$24Inner = Class.forName("beanmapper_compare.vo.SourceVO$Inner");
        CGLIB$load_class$java$2Eutil$2EList = Class.forName("java.util.List");
        CGLIB$load_class$java$2Eutil$2EMap = Class.forName("java.util.Map");
        CGLIB$load_class$java$2Elang$2EInteger = Class.forName("java.lang.Integer");
        CGLIB$load_class$java$2Elang$2ELong = Class.forName("java.lang.Long");
        CGLIB$load_class$java$2Elang$2EByte = Class.forName("java.lang.Byte");
        CGLIB$load_class$java$2Elang$2EString = Class.forName("java.lang.String");
    }

    static {
        CGLIB$STATICHOOK1();
    }
}

Summary

BeanCopier does perform very well, but the source shows that without a Converter it only copies properties that have the same name and type. Once you pass a Converter, BeanCopier hands every property to it, so all conversion rules have to be handled inside the Converter's convert method.
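A minimal sketch of what such a Converter might look like, matching the generated code above where the third argument passed to convert is the name of the target setter (the string-trimming rule is just an invented example):

@Test
public void beanCopierConverterTest() {
    SourceVO sourceVO = getSourceVO();
    TargetVO targetVO = new TargetVO();

    // The third argument (true) makes the generated copy method delegate every
    // property to the Converter, as in the decompiled class above.
    BeanCopier copier = BeanCopier.create(SourceVO.class, TargetVO.class, true);

    copier.copy(sourceVO, targetVO, new Converter() {
        @Override
        public Object convert(Object value, Class target, Object context) {
            // context is the name of the target setter, e.g. "setPattr1"
            if (value instanceof String) {
                return ((String) value).trim();   // example rule: trim all strings
            }
            // everything else is passed through unchanged
            return value;
        }
    });
}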

Dozer

Usage

BeanUtils and BeanCopier are both fairly basic: they require properties to have the same name and, generally, the same type. In practice this requirement is often too strict. Some VOs cannot be modified for various reasons, and some are objects from an external SDK.

Naming conventions also differ between objects: camel case, underscores, you name it. So what we need is something more flexible and feature-rich, including support for custom conversions.

Dozer supports implicit mapping of properties with the same name, conversion between basic types, explicitly specified mappings, excluded fields, recursive mapping, deep (nested) mapping, date format and String conversion, and so on. It also supports custom Converters, reusing a single mapping definition in multiple places, EventListeners, and more. On top of that, Dozer can be configured through the API, XML, or annotations, whichever suits your taste. More features can be found in the documentation.

It is not practical to demonstrate every feature here, so below is just a quick overview using the API style; for more detail, or for the XML and annotation configuration, see the official documentation.

private Mapper dozerMapper;

    @Before
    public void setup(){
        dozerMapper = DozerBeanMapperBuilder.create()
                .withMappingBuilder(new BeanMappingBuilder() {
                    @Override
                    protected void configure() {
                        mapping(SourceVO.class, TargetVO.class)
                                .fields("fullName", "name")
                                .exclude("in");
                    }
                })
                .withCustomConverter(null)
                .withEventListener(null)
                .build();
    }
    
    @Test
    public void dozerTest(){
        SourceVO sourceVO = getSourceVO();
        log.info("sourceVO={}", GsonUtil.toJson(sourceVO));
        TargetVO map = dozerMapper.map(sourceVO, TargetVO.class);
        log.info("map={}", GsonUtil.toJson(map));
    }
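The withCustomConverter hook above is passed null in this example. A sketch of what a real converter might look like, assuming Dozer 6.x's DozerConverter base class (the String-to-Long conversion is an invented example):

// Hypothetical custom converter; would be registered via
// .withCustomConverter(new StringToLongConverter()) in the builder above.
public class StringToLongConverter extends DozerConverter<String, Long> {

    public StringToLongConverter() {
        super(String.class, Long.class);
    }

    @Override
    public Long convertTo(String source, Long destination) {
        return source == null ? null : Long.valueOf(source);
    }

    @Override
    public String convertFrom(Long source, String destination) {
        return source == null ? null : String.valueOf(source);
    }
}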

Principle

Dozer is essentially a reflection/Introspector-based implementation, but its rich feature set and its support for multiple configuration styles (API, XML, annotations) make the code look somewhat complex. When reading the source we do not need to dig into every class; it is enough to know roughly what each one does and to focus on the core flow. Let's look at the build method that constructs a Mapper and the map method that performs the mapping.

The build method is simple: it is an initialization step that takes the user's configuration and wraps it into a series of configuration, context and helper objects to be used later. We do not need to dig into how each of these objects is built; their names give a fair idea of what each one does and is responsible for.

DozerBeanMapper(List<String> mappingFiles,
                    BeanContainer beanContainer,
                    DestBeanCreator destBeanCreator,
                    DestBeanBuilderCreator destBeanBuilderCreator,
                    BeanMappingGenerator beanMappingGenerator,
                    PropertyDescriptorFactory propertyDescriptorFactory,
                    List<CustomConverter> customConverters,
                    List<MappingFileData> mappingsFileData,
                    List<EventListener> eventListeners,
                    CustomFieldMapper customFieldMapper,
                    Map<String, CustomConverter> customConvertersWithId,
                    ClassMappings customMappings,
                    Configuration globalConfiguration,
                    CacheManager cacheManager) {
        this.beanContainer = beanContainer;
        this.destBeanCreator = destBeanCreator;
        this.destBeanBuilderCreator = destBeanBuilderCreator;
        this.beanMappingGenerator = beanMappingGenerator;
        this.propertyDescriptorFactory = propertyDescriptorFactory;
        this.customConverters = new ArrayList<>(customConverters);
        this.eventListeners = new ArrayList<>(eventListeners);
        this.mappingFiles = new ArrayList<>(mappingFiles);
        this.customFieldMapper = customFieldMapper;
        this.customConvertersWithId = new HashMap<>(customConvertersWithId);
        this.eventManager = new DefaultEventManager(eventListeners);
        this.customMappings = customMappings;
        this.globalConfiguration = globalConfiguration;
        this.cacheManager = cacheManager;
    }

The map method performs the actual object mapping; its entry point is the mapGeneral method of MappingProcessor:

private <T> T mapGeneral(Object srcObj, final Class<T> destClass, final T destObj, final String mapId) {
    srcObj = MappingUtils.deProxy(srcObj, beanContainer);

    Class<T> destType;
    T result;
    if (destClass == null) {
        destType = (Class<T>) destObj.getClass();
        result = destObj;
    } else {
        destType = destClass;
        result = null;
    }

    ClassMap classMap = null;
    try {
        // Get the mapping configuration between the two classes
        classMap = getClassMap(srcObj.getClass(), destType, mapId);

        // Notify registered listeners that mapping has started
        eventManager.on(new DefaultEvent(EventTypes.MAPPING_STARTED, classMap, null, srcObj, result, null));

        // See if there is a custom Converter
        Class<?> converterClass = MappingUtils.findCustomConverter(converterByDestTypeCache,
                classMap.getCustomConverters(), srcObj.getClass(), destType);

        if (destObj == null) {
            // If this is a nested MapperAware conversion this mapping can be already processed
            // but we can do this optimization only in case of no destObject, instead we must copy to the dest object
            Object alreadyMappedValue = mappedFields.getMappedValue(srcObj, destType, mapId);
            if (alreadyMappedValue != null) {
                return (T) alreadyMappedValue;
            }
        }

        // A custom Converter takes precedence for the mapping
        if (converterClass != null) {
            return (T) mapUsingCustomConverter(converterClass, srcObj.getClass(), srcObj, destType, result, null, true);
        }

        // Wrap the configuration into a creation directive
        BeanCreationDirective creationDirective = new BeanCreationDirective(srcObj, classMap.getSrcClassToMap(),
                classMap.getDestClassToMap(), destType, classMap.getDestClassBeanFactory(),
                classMap.getDestClassBeanFactoryId(), classMap.getDestClassCreateMethod(),
                classMap.getDestClass().isSkipConstructor());

        // Continue with the mapping
        result = createByCreationDirectiveAndMap(creationDirective, classMap, srcObj, result, false, null);
    } catch (Throwable e) {
        MappingUtils.throwMappingException(e);
    }
    eventManager.on(new DefaultEvent(EventTypes.MAPPING_FINISHED, classMap, null, srcObj, result, null));

    return result;
}

In the normal flow, createByCreationDirectiveAndMap eventually calls mapFromFieldMap, which, in the absence of a custom converter, calls mapOrRecurseObject.

In most cases, field mappings are resolved in this method:

private Object mapOrRecurseObject(Object srcObj, Object srcFieldValue, Class<?> destFieldType, FieldMap fieldMap, Object destObj) {
    Class<?> srcFieldClass = srcFieldValue != null ? srcFieldValue.getClass() : fieldMap.getSrcFieldType(srcObj.getClass());
    Class<?> converterClass = MappingUtils.determineCustomConverter(fieldMap, converterByDestTypeCache,
            fieldMap.getClassMap().getCustomConverters(), srcFieldClass, destFieldType);

    // Custom Converter handling
    if (converterClass != null) {
        return mapUsingCustomConverter(converterClass, srcFieldClass, srcFieldValue, destFieldType, destObj, fieldMap, false);
    }

    if (srcFieldValue == null) {
        return null;
    }

    String srcFieldName = fieldMap.getSrcFieldName();
    String destFieldName = fieldMap.getDestFieldName();

    if (!(DozerConstants.SELF_KEYWORD.equals(srcFieldName) && DozerConstants.SELF_KEYWORD.equals(destFieldName))) {
        Object alreadyMappedValue = mappedFields.getMappedValue(srcFieldValue, destFieldType, fieldMap.getMapId());
        if (alreadyMappedValue != null) {
            return alreadyMappedValue;
        }
    }

    // Copy by reference (configurable): just return the source value directly
    if (fieldMap.isCopyByReference()) {
        // just get the src and return it, no transformation.
        return srcFieldValue;
    }

    // Handling of Map types
    boolean isSrcFieldClassSupportedMap = MappingUtils.isSupportedMap(srcFieldClass);
    boolean isDestFieldTypeSupportedMap = MappingUtils.isSupportedMap(destFieldType);
    if (isSrcFieldClassSupportedMap && isDestFieldTypeSupportedMap) {
        return mapMap(srcObj, (Map<?, ?>) srcFieldValue, fieldMap, destObj);
    }
    if (fieldMap instanceof MapFieldMap && destFieldType.equals(Object.class)) {
        destFieldType = fieldMap.getDestHintContainer() != null ? fieldMap.getDestHintContainer().getHint() : srcFieldClass;
    }

    // Primitive-type mapping: PrimitiveOrWrapperConverter supports conversion between compatible basic types
    if (primitiveConverter.accepts(srcFieldClass) || primitiveConverter.accepts(destFieldType)) {
        // Primitive or Wrapper conversion
        if (fieldMap.getDestHintContainer() != null) {
            Class<?> destHintType = fieldMap.getDestHintType(srcFieldValue.getClass());
            // if the destType is null this means that there was more than one hint.
            // we must have already set the destType then.
            if (destHintType != null) {
                destFieldType = destHintType;
            }
        }

        //#1841448 - if trim-strings=true, then use a trimmed src string value when converting to dest value
        Object convertSrcFieldValue = srcFieldValue;
        if (fieldMap.isTrimStrings() && srcFieldValue.getClass().equals(String.class)) {
            convertSrcFieldValue = ((String) srcFieldValue).trim();
        }

        DateFormatContainer dfContainer = new DateFormatContainer(fieldMap.getDateFormat());

        if (fieldMap instanceof MapFieldMap && !primitiveConverter.accepts(destFieldType)) {
            return primitiveConverter.convert(convertSrcFieldValue, convertSrcFieldValue.getClass(), dfContainer);
        } else {
            return primitiveConverter.convert(convertSrcFieldValue, destFieldType, dfContainer, destFieldName, destObj);
        }
    }

    // Mapping of supported collection types
    if (MappingUtils.isSupportedCollection(srcFieldClass) && (MappingUtils.isSupportedCollection(destFieldType))) {
        return mapCollection(srcObj, srcFieldValue, fieldMap, destObj);
    }

    // Enum mapping
    if (MappingUtils.isEnumType(srcFieldClass, destFieldType)) {
        return mapEnum((Enum) srcFieldValue, (Class<Enum>) destFieldType);
    }

    if (fieldMap.getDestDeepIndexHintContainer() != null) {
        destFieldType = fieldMap.getDestDeepIndexHintContainer().getHint();
    }

    // Default: recurse into the custom object
    return mapCustomObject(fieldMap, destObj, destFieldType, destFieldName, srcFieldValue);
}

Finally, the mapCustomObject method. You will find that its main job is recursion: whether it ends up calling createByCreationDirectiveAndMap or mapToDestObject, the nested object is mapped with the same machinery.

private Object mapCustomObject(FieldMap fieldMap, Object destObj, Class<?> destFieldType, String destFieldName, Object srcFieldValue) {
    srcFieldValue = MappingUtils.deProxy(srcFieldValue, beanContainer);

    // Custom java bean. Need to make sure that the destination object is not
    // already instantiated.
    Object result = null;
    // in case of iterate feature new objects are created in any case
    if (!DozerConstants.ITERATE.equals(fieldMap.getDestFieldType())) {
        result = getExistingValue(fieldMap, destObj, destFieldType);
    }

    // if the field is not null than we don't want a new instance
    if (result == null) {
        // first check to see if this plain old field map has hints to the actual
        // type.
        if (fieldMap.getDestHintContainer() != null) {
            Class<?> destHintType = fieldMap.getDestHintType(srcFieldValue.getClass());
            // if the destType is null this means that there was more than one hint.
            // we must have already set the destType then.
            if (destHintType != null) {
                destFieldType = destHintType;
            }
        }

        // Check to see if explicit map-id has been specified for the field
        // mapping
        String mapId = fieldMap.getMapId();

        Class<?> targetClass;
        if (fieldMap.getDestHintContainer() != null && fieldMap.getDestHintContainer().getHint() != null) {
            targetClass = fieldMap.getDestHintContainer().getHint();
        } else {
            targetClass = destFieldType;
        }
        ClassMap classMap = getClassMap(srcFieldValue.getClass(), targetClass, mapId);

        BeanCreationDirective creationDirective = new BeanCreationDirective(srcFieldValue,
                classMap.getSrcClassToMap(), classMap.getDestClassToMap(), destFieldType,
                classMap.getDestClassBeanFactory(), classMap.getDestClassBeanFactoryId(),
                fieldMap.getDestFieldCreateMethod() != null ? fieldMap.getDestFieldCreateMethod() : classMap.getDestClassCreateMethod(),
                classMap.getDestClass().isSkipConstructor(), destObj, destFieldName);

        // Recurse into the nested object
        result = createByCreationDirectiveAndMap(creationDirective, classMap, srcFieldValue, null, false, fieldMap.getMapId());
    } else {
        mapToDestObject(null, srcFieldValue, result, false, fieldMap.getMapId());
    }

    return result;
}

Summary

Dozer is powerful, but it still relies on reflection under the hood, so in the performance test it only performs so-so, faster only than Apache's BeanUtils. If performance is not your main concern, it is a reasonable choice.
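Also worth noting: building the Mapper is relatively expensive since it parses the whole mapping configuration, so in real code it is usually built once and shared rather than rebuilt per call. A minimal sketch of that pattern (the holder class is illustrative, not from the article):

// Build the Dozer Mapper once and reuse it across the application.
public final class DozerMapperHolder {

    private static final Mapper MAPPER = DozerBeanMapperBuilder.create().build();

    private DozerMapperHolder() {
    }

    public static Mapper mapper() {
        return MAPPER;
    }
}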

Orika

Orika can be said to combine the advantages of the tools above: it is feature-rich, and under the hood it uses Javassist to generate bytecode, so it runs very efficiently.

Usage

Orika supports essentially everything Dozer does. Here is a brief introduction to its use; for the full API, refer to the User Guide.

private MapperFactory mapperFactory;

@Before
public void setup() {
    mapperFactory = new DefaultMapperFactory.Builder().build();
    ConverterFactory converterFactory = mapperFactory.getConverterFactory();
    converterFactory.registerConverter(new TypeConverter());
    mapperFactory.classMap(SourceVO.class, TargetVO.class)
            .field("fullName", "name")
            .field("type", "enumType")
            .exclude("in")
            .byDefault()
            .register();
}

@Test
public void main() {
    MapperFacade mapper = mapperFactory.getMapperFacade();
    SourceVO sourceVO = getSourceVO();
    log.info("sourceVO={}", GsonUtil.toJson(sourceVO));
    TargetVO map = mapper.map(sourceVO, TargetVO.class);
    log.info("map={}", GsonUtil.toJson(map));
}
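The TypeConverter registered above is not shown in the article. A sketch of what it might look like, assuming it backs the .field("type", "enumType") mapping by turning an Integer code into TargetVO.EnumType, and assuming Orika 1.5.x where CustomConverter.convert takes a MappingContext (the ordinal-based lookup is invented for illustration):

// Hypothetical sketch of the converter registered via registerConverter(new TypeConverter())
public class TypeConverter extends CustomConverter<Integer, TargetVO.EnumType> {

    @Override
    public TargetVO.EnumType convert(Integer source,
                                     Type<? extends TargetVO.EnumType> destinationType,
                                     MappingContext mappingContext) {
        if (source == null) {
            return null;
        }
        // Invented rule: match the integer code against the enum's ordinal
        for (TargetVO.EnumType value : TargetVO.EnumType.values()) {
            if (value.ordinal() == source) {
                return value;
            }
        }
        return null;
    }
}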

Principle

To explain the implementation, let’s first look at what Orika is doing behind the scenes.

By adding the following configuration, we can see the source code and bytecode of the Mapper generated by Orika during mapping.

System.setProperty("ma.glasnost.orika.writeSourceFiles", "true");
System.setProperty("ma.glasnost.orika.writeClassFiles", "true");
System.setProperty("ma.glasnost.orika.writeSourceFilesToPath", "path");
System.setProperty("ma.glasnost.orika.writeClassFilesToPath", "path");

Using the above example, let’s look at the Java code generated by Orika:

package ma.glasnost.orika.generated;

public class Orika_TargetVO_SourceVO_Mapper947163525829122$0 extends ma.glasnost.orika.impl.GeneratedMapperBase {

	public void mapAtoB(java.lang.Object a, java.lang.Object b, ma.glasnost.orika.MappingContext mappingContext) {


super.mapAtoB(a, b, mappingContext);


// sourceType: SourceVO
beanmapper_compare.vo.SourceVO source = ((beanmapper_compare.vo.SourceVO)a); 
// destinationType: TargetVO
beanmapper_compare.vo.TargetVO destination = ((beanmapper_compare.vo.TargetVO)b); 


destination.setName(((java.lang.String)source.getFullName())); 
if ( !(((java.lang.Integer)source.getType()) == null)){ 
destination.setEnumType(((beanmapper_compare.vo.TargetVO.EnumType)((ma.glasnost.orika.Converter)usedConverters[0]).convert(((java.lang.Integer)source.getType()), ((ma.glasnost.orika.metadata.Type)usedTypes[0]), mappingContext))); 
} else { 
destination.setEnumType(null);
 }
if ( !(((java.util.Date)source.getDate1()) == null)){ 
destination.setDate1(((java.util.Date)((ma.glasnost.orika.Converter)usedConverters[1]).convert(((java.util.Date)source.getDate1()), ((ma.glasnost.orika.metadata.Type)usedTypes[1]), mappingContext))); 
} else { 
destination.setDate1(null);
 }if ( !(((java.util.List)source.getListData()) == null)) {

java.util.List new_listData = ((java.util.List)new java.util.ArrayList()); 

new_listData.addAll(mapperFacade.mapAsList(((java.util.List)source.getListData()), ((ma.glasnost.orika.metadata.Type)usedTypes[2]), ((ma.glasnost.orika.metadata.Type)usedTypes[3]), mappingContext)); 
destination.setListData(new_listData); 
} else {
 if ( !(((java.util.List)destination.getListData()) == null)) {
destination.setListData(null);
};
}if ( !(((java.util.Map)source.getMapData()) == null)){

java.util.Map new_mapData = ((java.util.Map)new java.util.LinkedHashMap()); 
for( java.util.Iterator mapData_$_iter = ((java.util.Map)source.getMapData()).entrySet().iterator(); mapData_$_iter.hasNext(); ) { 

java.util.Map.Entry sourceMapDataEntry = ((java.util.Map.Entry)mapData_$_iter.next()); 
java.lang.Integer newMapDataKey = null; 
java.util.List newMapDataVal = null; 
if ( !(((java.lang.Long)sourceMapDataEntry.getKey()) == null)){ 
newMapDataKey = ((java.lang.Integer)((ma.glasnost.orika.Converter)usedConverters[2]).convert(((java.lang.Long)sourceMapDataEntry.getKey()), ((ma.glasnost.orika.metadata.Type)usedTypes[3]), mappingContext)); 
} else { 
newMapDataKey = null;
 }
if ( !(((java.util.List)sourceMapDataEntry.getValue()) == null)) {

java.util.List new_newMapDataVal = ((java.util.List)new java.util.ArrayList()); 

new_newMapDataVal.addAll(mapperFacade.mapAsList(((java.util.List)sourceMapDataEntry.getValue()), ((ma.glasnost.orika.metadata.Type)usedTypes[2]), ((ma.glasnost.orika.metadata.Type)usedTypes[4]), mappingContext)); 
newMapDataVal = new_newMapDataVal; 
} else {
 if ( !(newMapDataVal == null)) {
newMapDataVal = null;
};
}
new_mapData.put(newMapDataKey, newMapDataVal); 

}
destination.setMapData(new_mapData); 
} else {
 destination.setMapData(null);
}
destination.setP1(((java.lang.Integer)source.getP1())); 
destination.setP2(((java.lang.Long)source.getP2())); 
destination.setP3(((java.lang.Byte)source.getP3())); 
destination.setPattr1(((java.lang.String)source.getPattr1())); 
if ( !(((java.lang.String)source.getSeq()) == null)){ 
destination.setSeq(((java.lang.Long)((ma.glasnost.orika.Converter)usedConverters[3]).convert(((java.lang.String)source.getSeq()), ((ma.glasnost.orika.metadata.Type)usedTypes[2]), mappingContext))); 
} else { 
destination.setSeq(null);
 }
		if(customMapper != null) { 
			 customMapper.mapAtoB(source, destination, mappingContext);
		}
	}

	public void mapBtoA(java.lang.Object a, java.lang.Object b, ma.glasnost.orika.MappingContext mappingContext) {


super.mapBtoA(a, b, mappingContext);


// sourceType: TargetVO
beanmapper_compare.vo.TargetVO source = ((beanmapper_compare.vo.TargetVO)a); 
// destinationType: SourceVO
beanmapper_compare.vo.SourceVO destination = ((beanmapper_compare.vo.SourceVO)b); 


destination.setFullName(((java.lang.String)source.getName())); 
if ( !(((beanmapper_compare.vo.TargetVO.EnumType)source.getEnumType()) == null)){ 
destination.setType(((java.lang.Integer)((ma.glasnost.orika.Converter)usedConverters[0]).convert(((beanmapper_compare.vo.TargetVO.EnumType)source.getEnumType()), ((ma.glasnost.orika.metadata.Type)usedTypes[3]), mappingContext))); 
} else { 
destination.setType(null);
 }
if ( !(((java.util.Date)source.getDate1()) == null)){ 
destination.setDate1(((java.util.Date)((ma.glasnost.orika.Converter)usedConverters[1]).convert(((java.util.Date)source.getDate1()), ((ma.glasnost.orika.metadata.Type)usedTypes[1]), mappingContext))); 
} else { 
destination.setDate1(null);
 }if ( !(((java.util.List)source.getListData()) == null)) {

java.util.List new_listData = ((java.util.List)new java.util.ArrayList()); 

new_listData.addAll(mapperFacade.mapAsList(((java.util.List)source.getListData()), ((ma.glasnost.orika.metadata.Type)usedTypes[3]), ((ma.glasnost.orika.metadata.Type)usedTypes[2]), mappingContext)); 
destination.setListData(new_listData); 
} else {
 if ( !(((java.util.List)destination.getListData()) == null)) {
destination.setListData(null);
};
}if ( !(((java.util.Map)source.getMapData()) == null)){

java.util.Map new_mapData = ((java.util.Map)new java.util.LinkedHashMap()); 
for( java.util.Iterator mapData_$_iter = ((java.util.Map)source.getMapData()).entrySet().iterator(); mapData_$_iter.hasNext(); ) { 

java.util.Map.Entry sourceMapDataEntry = ((java.util.Map.Entry)mapData_$_iter.next()); 
java.lang.Long newMapDataKey = null; 
java.util.List newMapDataVal = null; 
if ( !(((java.lang.Integer)sourceMapDataEntry.getKey()) == null)){ 
newMapDataKey = ((java.lang.Long)((ma.glasnost.orika.Converter)usedConverters[2]).convert(((java.lang.Integer)sourceMapDataEntry.getKey()), ((ma.glasnost.orika.metadata.Type)usedTypes[2]), mappingContext)); 
} else { 
newMapDataKey = null;
 }
if ( !(((java.util.List)sourceMapDataEntry.getValue()) == null)) {

java.util.List new_newMapDataVal = ((java.util.List)new java.util.ArrayList()); 

new_newMapDataVal.addAll(mapperFacade.mapAsList(((java.util.List)sourceMapDataEntry.getValue()), ((ma.glasnost.orika.metadata.Type)usedTypes[4]), ((ma.glasnost.orika.metadata.Type)usedTypes[2]), mappingContext)); 
newMapDataVal = new_newMapDataVal; 
} else {
 if ( !(newMapDataVal == null)) {
newMapDataVal = null;
};
}
new_mapData.put(newMapDataKey, newMapDataVal); 

}
destination.setMapData(new_mapData); 
} else {
 destination.setMapData(null);
}
destination.setP1(((java.lang.Integer)source.getP1())); 
destination.setP2(((java.lang.Long)source.getP2())); 
destination.setP3(((java.lang.Byte)source.getP3())); 
destination.setPattr1(((java.lang.String)source.getPattr1())); 
if ( !(((java.lang.Long)source.getSeq()) == null)){ 
destination.setSeq(((java.lang.String)((ma.glasnost.orika.Converter)usedConverters[4]).convert(((java.lang.Long)source.getSeq()), ((ma.glasnost.orika.metadata.Type)usedTypes[5]), mappingContext))); 
} else { 
destination.setSeq(null);
 }
		if(customMapper != null) { 
			 customMapper.mapBtoA(source, destination, mappingContext);
		}
	}

}

As you can see, the generated Mapper covers both directions: mapAtoB handles src -> dest and mapBtoA handles dest -> src, with dedicated code paths for converters, collections and Map fields.

Ok, so let’s look at the implementation.

Orika's usage is similar to Dozer's: a MapperFactory is first built from the configuration, and a MapperFacade then serves as the unified entry point for mapping; both MapperFactory and MapperFacade are singletons. When you configure a class mapping, the MapperFactory only registers the ClassMap; it does not generate the mapper bytecode at that point. The mappers are initialized on the first call to getMapperFacade, so let's look at that method.

(Source: ma.glasnost.orika:orika-core:1.5.4)

public MapperFacade getMapperFacade() {
    if (!isBuilt) {
        synchronized (mapperFacade) {
            if (!isBuilt) {
                build();
            }
        }
    }
    return mapperFacade;
}

The mappers are constructed from the registered ClassMap information and the MappingContext:

public synchronized void build() {

    if (!isBuilding && !isBuilt) {
        isBuilding = true;

        MappingContext context = contextFactory.getContext();
        try {
            if (useBuiltinConverters) {
                BuiltinConverters.register(converterFactory);
            }
            converterFactory.setMapperFacade(mapperFacade);

            for (Map.Entry<MapperKey, ClassMap<Object, Object>> classMapEntry : classMapRegistry.entrySet()) {
                ClassMap<Object, Object> classMap = classMapEntry.getValue();
                if (classMap.getUsedMappers().isEmpty()) {
                    classMapEntry.setValue(classMap.copyWithUsedMappers(discoverUsedMappers(classMap)));
                }
            }
            buildClassMapRegistry();

            Map<ClassMap<?, ?>, GeneratedMapperBase> generatedMappers = new HashMap<ClassMap<?, ?>, GeneratedMapperBase>();
            // The ClassMaps configured via mapperFactory.classMap(...) were stored in classMapRegistry;
            // generate a mapper for each of them
            for (ClassMap<?, ?> classMap : classMapRegistry.values()) {
                generatedMappers.put(classMap, buildMapper(classMap, false, context));
            }

            Set<Entry<ClassMap<?, ?>, GeneratedMapperBase>> generatedMapperEntries = generatedMappers.entrySet();
            for (Entry<ClassMap<?, ?>, GeneratedMapperBase> generatedMapperEntry : generatedMapperEntries) {
                buildObjectFactories(generatedMapperEntry.getKey(), context);
                initializeUsedMappers(generatedMapperEntry.getValue(), generatedMapperEntry.getKey(), context);
            }

        } finally {
            contextFactory.release(context);
        }

        isBuilt = true;
        isBuilding = false;
    }
}

public Set<ClassMap<Object, Object>> lookupUsedClassMap(MapperKey mapperKey) {
    Set<ClassMap<Object, Object>> usedClassMapSet = usedMapperMetadataRegistry.get(mapperKey);
    if (usedClassMapSet == null) {
        usedClassMapSet = Collections.emptySet();
    }
    return usedClassMapSet;
}

Tracing into the buildMapper method:

private GeneratedMapperBase buildMapper(ClassMap<?, ?> classMap, boolean isAutoGenerated, MappingContext context) {
    register(classMap.getAType(), classMap.getBType(), isAutoGenerated);
    register(classMap.getBType(), classMap.getAType(), isAutoGenerated);

    final MapperKey mapperKey = new MapperKey(classMap.getAType(), classMap.getBType());
    // Call mapperGenerator's build method to generate the mapper
    final GeneratedMapperBase mapper = mapperGenerator.build(classMap, context);
    mapper.setMapperFacade(mapperFacade);
    mapper.setFromAutoMapping(isAutoGenerated);

    if (classMap.getCustomizedMapper() != null) {
        final Mapper<Object, Object> customizedMapper = (Mapper<Object, Object>) classMap.getCustomizedMapper();
        mapper.setCustomMapper(customizedMapper);
    }
    mappersRegistry.remove(mapper);
    // Save the generated mapper into mappersRegistry
    mappersRegistry.add(mapper);
    classMapRegistry.put(mapperKey, (ClassMap<Object, Object>) classMap);

    return mapper;
}

The build method of MapperGenerator:

public GeneratedMapperBase build(ClassMap<?, ?> classMap, MappingContext context) {

    StringBuilder logDetails = null;
    try {
        compilerStrategy.assureTypeIsAccessible(classMap.getAType().getRawType());
        compilerStrategy.assureTypeIsAccessible(classMap.getBType().getRawType());

        if (LOGGER.isDebugEnabled()) {
            logDetails = new StringBuilder();
            String srcName = TypeFactory.nameOf(classMap.getAType(), classMap.getBType());
            String dstName = TypeFactory.nameOf(classMap.getBType(), classMap.getAType());
            logDetails.append("Generating new mapper for (" + srcName + ", " + dstName + ")");
        }

        // Build the context used to generate the source code and bytecode
        final SourceCodeContext mapperCode = new SourceCodeContext(
                classMap.getMapperClassName(), GeneratedMapperBase.class, context, logDetails);

        Set<FieldMap> mappedFields = new LinkedHashSet<FieldMap>();
        // Add the mapAtoB method; addMapMethod essentially "hand-writes" the source,
        // generating a get/set statement for each mapped field
        mappedFields.addAll(addMapMethod(mapperCode, true, classMap, logDetails));
        // Add the mapBtoA method
        mappedFields.addAll(addMapMethod(mapperCode, false, classMap, logDetails));

        // Compile the generated source and create the mapper instance
        GeneratedMapperBase instance = mapperCode.getInstance();
        instance.setAType(classMap.getAType());
        instance.setBType(classMap.getBType());
        instance.setFavorsExtension(classMap.favorsExtension());

        if (logDetails != null) {
            LOGGER.debug(logDetails.toString());
            logDetails = null;
        }

        classMap = classMap.copy(mappedFields);
        context.registerMapperGeneration(classMap);

        return instance;

    } catch (final Exception e) {
        if (logDetails != null) {
            logDetails.append("\n<---- ERROR occurred here");
            LOGGER.debug(logDetails.toString());
        }
        throw new MappingException(e);
    }
}

Generating the mapper instance boils down to compiling the generated class and instantiating it:

T instance = (T) compileClass().newInstance();

protected Class<?> compileClass() throws SourceCodeGenerationException {
        try {
            return compilerStrategy.compileClass(this);
        } catch (SourceCodeGenerationException e) {
            throw e;
        }
    }

The default for compilerStrategy is Javassist (you can also customize the bytecode generation strategy)

The JavassistCompilerStrategy.compileClass method:

It’s basically a process of using Javassist, and you’ve come to this point through all the paving (through configuration information, context information, piecing together Java source code, and so on)

public Class<?> compileClass(SourceCodeContext sourceCode) throws SourceCodeGenerationException {

    StringBuilder className = new StringBuilder(sourceCode.getClassName());
    CtClass byteCodeClass = null;
    int attempts = 0;
    Random rand = RANDOM;
    while (byteCodeClass == null) {
        try {
            // Create the class
            byteCodeClass = classPool.makeClass(className.toString());
        } catch (RuntimeException e) {
            if (attempts < 5) {
                className.append(Integer.toHexString(rand.nextInt()));
            } else {
                // No longer likely to be accidental name collision;
                // propagate the error
                throw e;
            }
        }
    }

    CtClass abstractMapperClass;
    Class<?> compiledClass;

    try {
        // Write the generated source file to disk (controlled by the configuration mentioned above)
        writeSourceFile(sourceCode);

        Boolean existing = superClasses.put(sourceCode.getSuperClass(), true);
        if (existing == null || !existing) {
            classPool.insertClassPath(new ClassClassPath(sourceCode.getSuperClass()));
        }

        if (registerClassLoader(Thread.currentThread().getContextClassLoader())) {
            classPool.insertClassPath(new LoaderClassPath(Thread.currentThread().getContextClassLoader()));
        }

        abstractMapperClass = classPool.get(sourceCode.getSuperClass().getCanonicalName());
        byteCodeClass.setSuperclass(abstractMapperClass);

        // Add the fields to the class using Javassist
        for (String fieldDef : sourceCode.getFields()) {
            try {
                byteCodeClass.addField(CtField.make(fieldDef, byteCodeClass));
            } catch (CannotCompileException e) {
                LOG.error("An exception occurred while compiling: " + fieldDef + " for " + sourceCode.getClassName(), e);
                throw e;
            }
        }

        // Add the methods to the class using Javassist
        for (String methodDef : sourceCode.getMethods()) {
            try {
                byteCodeClass.addMethod(CtNewMethod.make(methodDef, byteCodeClass));
            } catch (CannotCompileException e) {
                LOG.error("An exception occured while compiling the following method:\n\n " + methodDef
                        + "\n\n for " + sourceCode.getClassName() + "\n", e);
                throw e;
            }
        }

        // Generate the class
        compiledClass = byteCodeClass.toClass(Thread.currentThread().getContextClassLoader(),
                this.getClass().getProtectionDomain());
        // Write the class file to disk (controlled by the configuration mentioned above)
        writeClassFile(sourceCode, byteCodeClass);

    } catch (NotFoundException e) {
        throw new SourceCodeGenerationException(e);
    } catch (CannotCompileException e) {
        throw new SourceCodeGenerationException("Error compiling " + sourceCode.getClassName(), e);
    } catch (IOException e) {
        throw new SourceCodeGenerationException("Could not write files for " + sourceCode.getClassName(), e);
    }

    return compiledClass;
}

At this point the Mapper class has been generated. Now let's see how it is used when MapperFacade's map method is called.

Inside the resolveMappingStrategy method, the mapper is looked up in mappersRegistry by the pair of types (typeA, typeB), and then its mapAtoB or mapBtoA method is invoked, depending on the mapping direction.

Summary

Overall, Orika is a powerful and high performance tool and is recommended.
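As the source walk-through shows, the heavy lifting (Javassist compilation) happens when the factory is built, so in practice the MapperFactory is built once and the MapperFacade is reused. A minimal sketch of that pattern (the holder class and the mapping shown are illustrative, not from the article):

// Build the MapperFactory once; reuse the MapperFacade everywhere.
public final class OrikaMappers {

    private static final MapperFactory FACTORY = new DefaultMapperFactory.Builder().build();

    static {
        FACTORY.classMap(SourceVO.class, TargetVO.class)
                .field("fullName", "name")
                .byDefault()
                .register();
    }

    private OrikaMappers() {
    }

    public static MapperFacade facade() {
        return FACTORY.getMapperFacade();
    }
}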

Conclusion

By comparing BeanUtils, BeanCopier, Dozer, and Orika we have seen both their performance and how they are implemented. In practice you can choose according to your actual situation; overall, Orika is the recommended option.
