In the last article, I showed how Mybatis builds its Configuration object; in essence, the build phase is a mapping from configuration files to configuration objects. The other important part of Mybatis is how it uses those configuration objects to execute the SQL statements specified by the user and wrap the result set into the type the user asked for.

I have written several source-code analysis articles by now, and I believe that if you really want to understand the source code, you have to get hands-on: just reading an article is not enough, you need to try debugging it yourself. Since some of my source-code analyses are fairly long, feel free to bookmark this one if you are just curious, and don’t be stingy with your likes if you find it helpful (#^.^#)

Starting with DefaultSqlSession

In the previous article, we learned that a SQL session must be created to perform CRUD operations. Mybatis defines this as the SqlSession interface, and its default implementation class, DefaultSqlSession, is the one generally used.

Let’s take a look at some of the method definitions in DefaultSqlSession, which are mostly CRUD operations.

public class DefaultSqlSession implements SqlSession {

  // Configuration object
  private Configuration configuration;
  // Executor
  private Executor executor;
  // Whether to commit automatically
  private boolean autoCommit;
  private boolean dirty;

  public DefaultSqlSession(Configuration configuration, Executor executor, boolean autoCommit) {}

  public DefaultSqlSession(Configuration configuration, Executor executor) {}

  @Override
  public <T> T selectOne(String statement) {}

  // core selectOne
  @Override
  public <T> T selectOne(String statement, Object parameter) {}

  @Override
  public <K, V> Map<K, V> selectMap(String statement, String mapKey) {}

  @Override
  public <K, V> Map<K, V> selectMap(String statement, Object parameter, String mapKey) {}

  // core selectMap
  @Override
  public <K, V> Map<K, V> selectMap(String statement, Object parameter, String mapKey, RowBounds rowBounds) {}

  @Override
  public <E> List<E> selectList(String statement) {}

  @Override
  public <E> List<E> selectList(String statement, Object parameter) {}

  // core selectList
  @Override
  public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {}

  @Override
  public void select(String statement, Object parameter, ResultHandler handler) {}

  @Override
  public void select(String statement, ResultHandler handler) {}

  // Core select with a ResultHandler; the code is similar to selectList, the difference being the ResultHandler
  @Override
  public void select(String statement, Object parameter, RowBounds rowBounds, ResultHandler handler) {}

  @Override
  public int insert(String statement) {}

  @Override
  public int insert(String statement, Object parameter) {}

  @Override
  public int update(String statement) {}

  // core update
  @Override
  public int update(String statement, Object parameter) {}

  @Override
  public int delete(String statement) {}

  @Override
  public int delete(String statement, Object parameter) {}

  @Override
  public void commit() {}

  // core commit
  @Override
  public void commit(boolean force) {}

  @Override
  public void rollback() {}

  // core rollback
  @Override
  public void rollback(boolean force) {}

  // core flushStatements
  @Override
  public List<BatchResult> flushStatements() {}

  // core close
  @Override
  public void close() {}

  @Override
  public Configuration getConfiguration() { return configuration; }

  // MapperRegistry.getMapper is called
  @Override
  public <T> T getMapper(Class<T> type) {}

  @Override
  public Connection getConnection() {}

  // core clearCache
  @Override
  public void clearCache() {}

  // Check whether commit or rollback is required
  private boolean isCommitOrRollbackRequired(boolean force) {}

  // Wrap parameters as collections
  private Object wrapCollection(final Object object) {}

  // Strict Map: if a key is not found, throw BindingException instead of returning null
  public static class StrictMap<V> extends HashMap<String, V> {}
}

How Mybatis performs a select operation

The entry point of a select operation in Mybatis

In the DefaultSqlSession source above I only posted the method signatures. If you look at the full source, you will see that many select methods end up calling the same selectList method (including selectOne, which also calls selectList and simply takes the first element). The source code for this method is as follows.

public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {
    try {
      // Find the MappedStatement based on the statement id
      MappedStatement ms = configuration.getMappedStatement(statement);
      // Delegate to the executor to run the query. Note that the ResultHandler passed in here is null
      // rowBounds is used for logical (in-memory) paging
      return executor.query(ms, wrapCollection(parameter), rowBounds, Executor.NO_RESULT_HANDLER);
    } catch (Exception e) {
      throw ExceptionFactory.wrapException("Error querying database. Cause: " + e, e);
    } finally {
      ErrorContext.instance().reset();
    }
}

It’s a simple two-step process (a short usage sketch follows the list):

  1. Obtain the corresponding MappedStatement object from the Configuration according to the statement id.
  2. Pass the fetched MappedStatement into the executor's query method and return the result; the parameter is also wrapped along the way via wrapCollection.
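
To make the entry point concrete, here is a minimal usage sketch, assuming a MyBatis 3.x classpath; the configuration file name mybatis-config.xml and the statement id "com.example.AdminMapper.selectAdmin" are hypothetical placeholders, not something taken from the source being analyzed.

import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class SelectListDemo {
  public static void main(String[] args) throws Exception {
    // Build the SqlSessionFactory from the XML configuration (path is a placeholder)
    try (InputStream in = Resources.getResourceAsStream("mybatis-config.xml")) {
      SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(in);
      // openSession() eventually calls openSessionFromDataSource and returns a DefaultSqlSession
      try (SqlSession session = factory.openSession()) {
        // This call funnels into the selectList overload analyzed above;
        // the statement id is a hypothetical mapper namespace + id
        Object one = session.selectOne("com.example.AdminMapper.selectAdmin", 1L);
        System.out.println(one);
      }
    }
  }
}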

While analyzing the source, one question came up: which Executor is this? Executor is only an interface, and DefaultSqlSession merely declares the field without instantiating it, so which implementation class actually sits behind the interface?

The answer lies in the initialization process. The DefaultSqlSession is created by the DefaultSqlSessionFactory, and the DefaultSqlSessionFactory in turn is constructed by the SqlSessionFactoryBuilder.

// Create the DefaultSqlSessionFactory
// This method is in SqlSessionFactoryBuilder
public SqlSessionFactory build(Configuration config) {
  return new DefaultSqlSessionFactory(config);
}

Once we have the factory, we use it (the factory pattern) to create the DefaultSqlSession.

// Many of the openSession overloads ultimately
// call openSessionFromDataSource
@Override
public SqlSession openSession() {
  // The defaultExecutorType is passed in
  return openSessionFromDataSource(configuration.getDefaultExecutorType(), null, false);
}
// Returns the defaultExecutorType
public ExecutorType getDefaultExecutorType() {
  return defaultExecutorType;
}

private SqlSession openSessionFromDataSource(ExecutorType execType, TransactionIsolationLevel level, boolean autoCommit) {
    Transaction tx = null;
    try {
      final Environment environment = configuration.getEnvironment();
      final TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
      // Use the transaction factory to create a transaction
      tx = transactionFactory.newTransaction(environment.getDataSource(), level, autoCommit);
      // Create an executor (the transaction is held inside the executor)
      // Note that the executor is still created by the Configuration
      // We know from above that the type passed in is the default type
      final Executor executor = configuration.newExecutor(tx, execType);
      // Then create a DefaultSqlSession
      // The executor is passed in here
      return new DefaultSqlSession(configuration, executor, autoCommit);
    } catch (Exception e) {
      // If opening the transaction fails, close it
      closeTransaction(tx); // may have fetched a connection so lets call close()
      throw ExceptionFactory.wrapException("Error opening session. Cause: " + e, e);
    } finally {
      // Finally clear the error context
      ErrorContext.instance().reset();
    }
}

This is the build process for DefaultSqlSession, and you can see that the Executor we were looking for is instantiated here, ultimately created by the Configuration.

public Executor newExecutor(Transaction transaction, ExecutorType executorType) {
    executorType = executorType == null ? defaultExecutorType : executorType;
    // Fall back to SIMPLE if defaultExecutorType was also null
    executorType = executorType == null ? ExecutorType.SIMPLE : executorType;
    Executor executor;
    // Then a simple three-way branch: there are 3 executors, BatchExecutor/ReuseExecutor/SimpleExecutor
    if (ExecutorType.BATCH == executorType) {
      executor = new BatchExecutor(this, transaction);
    } else if (ExecutorType.REUSE == executorType) {
      executor = new ReuseExecutor(this, transaction);
    } else {
      executor = new SimpleExecutor(this, transaction);
    }
    // If caching is enabled (it is by default), wrap the executor in a CachingExecutor — the decorator pattern,
    // so a CachingExecutor is returned by default
    if (cacheEnabled) {
      executor = new CachingExecutor(executor);
    }
    // Plugins are applied here, which allows them to change the Executor's behavior
    executor = (Executor) interceptorChain.pluginAll(executor);
    return executor;
}

From the source above we can conclude: if an ExecutorType is specified, an Executor of the corresponding type is created; otherwise a SimpleExecutor is created by default. But regardless of type, if cacheEnabled is set to true in the Configuration (that is, caching is enabled), the original Executor is wrapped again in a CachingExecutor, so with caching on what you get back is a CachingExecutor. The decorator pattern is used here; a UML diagram of the executor hierarchy helps to visualize it.
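
If you want a specific executor type rather than the configured default, the SqlSessionFactory interface exposes openSession overloads that take an ExecutorType. A small sketch follows (the cacheEnabled wrapping described above still applies on top of whichever base executor is chosen); the factory is assumed to have been built as shown earlier.

import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class ExecutorTypeDemo {
  static void demo(SqlSessionFactory factory) {
    // Uses the defaultExecutorType (SIMPLE unless configured otherwise),
    // so the delegate inside CachingExecutor is a SimpleExecutor
    try (SqlSession simple = factory.openSession()) {
      // ...
    }
    // Explicitly ask for a BatchExecutor as the delegate
    try (SqlSession batch = factory.openSession(ExecutorType.BATCH)) {
      // ...
    }
  }
}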

Of course, one more question you might have at this point: when is cacheEnabled initialized? I'll keep it brief. Remember how we went from configuration files to configuration objects via the XMLConfigBuilder? cacheEnabled is configured under the <settings> tag in the mybatis-config.xml file, and if it is not configured, Mybatis defaults it to true. I've posted the source code for the default settings below, which covers both cacheEnabled and the default ExecutorType, so you can find the answers yourself (this method is in XMLConfigBuilder).

private void settingsElement(XNode context) throws Exception {
    if (context != null) {
      Properties props = context.getChildrenAsProperties();
      // Check that all settings are known to the Configuration class,
      // i.e. that a corresponding setter exists (no spelling errors)
      MetaClass metaConfig = MetaClass.forClass(Configuration.class);
      for (Object key : props.keySet()) {
        if (!metaConfig.hasSetter(String.valueOf(key))) {
          throw new BuilderException("The setting " + key + " is not known. Make sure you spelled it correctly (case sensitive).");
        }
      }
      // Set the properties one by one
      // How columns are automatically mapped to fields/properties
      configuration.setAutoMappingBehavior(AutoMappingBehavior.valueOf(props.getProperty("autoMappingBehavior", "PARTIAL")));
      // Caching
      configuration.setCacheEnabled(booleanValueOf(props.getProperty("cacheEnabled"), true));
      // proxyFactory (CGLIB | JAVASSIST)
      // Lazy loading relies on the proxy pattern, implemented via either CGLIB or JAVASSIST
      configuration.setProxyFactory((ProxyFactory) createInstance(props.getProperty("proxyFactory")));
      // Lazy loading
      configuration.setLazyLoadingEnabled(booleanValueOf(props.getProperty("lazyLoadingEnabled"), false));
      // Whether each property should be loaded on demand during lazy loading
      configuration.setAggressiveLazyLoading(booleanValueOf(props.getProperty("aggressiveLazyLoading"), true));
      // Whether a single statement may return multiple result sets
      configuration.setMultipleResultSetsEnabled(booleanValueOf(props.getProperty("multipleResultSetsEnabled"), true));
      // Use column labels instead of column names
      configuration.setUseColumnLabel(booleanValueOf(props.getProperty("useColumnLabel"), true));
      // Allow JDBC support for generated keys
      configuration.setUseGeneratedKeys(booleanValueOf(props.getProperty("useGeneratedKeys"), false));
      // Configure the default executor
      configuration.setDefaultExecutorType(ExecutorType.valueOf(props.getProperty("defaultExecutorType", "SIMPLE")));
      // The statement timeout
      configuration.setDefaultStatementTimeout(integerValueOf(props.getProperty("defaultStatementTimeout"), null));
      // Automatically map DB columns to camel-case Java properties (A_COLUMN --> aColumn)
      configuration.setMapUnderscoreToCamelCase(booleanValueOf(props.getProperty("mapUnderscoreToCamelCase"), false));
      // Use RowBounds on nested statements
      configuration.setSafeRowBoundsEnabled(booleanValueOf(props.getProperty("safeRowBoundsEnabled"), false));
      // Use a session-level local cache by default
      configuration.setLocalCacheScope(LocalCacheScope.valueOf(props.getProperty("localCacheScope", "SESSION")));
      // The JdbcType to use for null values
      configuration.setJdbcTypeForNull(JdbcType.valueOf(props.getProperty("jdbcTypeForNull", "OTHER")));
      // Which Object methods trigger lazy loading
      configuration.setLazyLoadTriggerMethods(stringSetValueOf(props.getProperty("lazyLoadTriggerMethods"), "equals,clone,hashCode,toString"));
      // Use a safe ResultHandler
      configuration.setSafeResultHandlerEnabled(booleanValueOf(props.getProperty("safeResultHandlerEnabled"), true));
      // The scripting language used for dynamic SQL generation
      configuration.setDefaultScriptingLanguage(resolveClass(props.getProperty("defaultScriptingLanguage")));
      // Whether to call setters (or Map.put) for null columns in the result set; not valid for primitives such as int, boolean, etc.
      configuration.setCallSettersOnNulls(booleanValueOf(props.getProperty("callSettersOnNulls"), false));
      // Prefix for logger names
      configuration.setLogPrefix(props.getProperty("logPrefix"));
      // Explicitly define which logging framework to use, or default to auto-discovery on the classpath
      configuration.setLogImpl(resolveClass(props.getProperty("logImpl")));
      // The configuration factory
      configuration.setConfigurationFactory(resolveClass(props.getProperty("configurationFactory")));
    }
}
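
Since settingsElement just calls plain setters on the Configuration object, the same settings can also be applied programmatically. Below is a minimal sketch of that, assuming you build the Configuration in code rather than from mybatis-config.xml (environment and mapper registration are omitted):

import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class ProgrammaticSettingsDemo {
  static SqlSessionFactory build(Configuration configuration) {
    configuration.setCacheEnabled(true);                       // already true by default
    configuration.setDefaultExecutorType(ExecutorType.SIMPLE); // default executor type
    configuration.setLazyLoadingEnabled(false);
    configuration.setMapUnderscoreToCamelCase(true);           // A_COLUMN --> aColumn
    // The build(Configuration) overload shown earlier returns a DefaultSqlSessionFactory
    return new SqlSessionFactoryBuilder().build(configuration);
  }
}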

To summarize: the select operations in SqlSession all funnel into a selectList method. That method looks up the MappedStatement the user asked for (by its statement id) in the Configuration object held by the SqlSession, then passes that MappedStatement to the query method of the SqlSession's Executor field, which fetches the result set, wraps it, and returns the list. When the Executor is initialized, the decorator pattern is used: a CachingExecutor is built around whichever base Executor was specified.

How does Mybatis implement the query method step by step?

Getting the result set

We have already seen that the decorator pattern is used for the Executor, which means the main flow first enters the query method of CachingExecutor, and that method internally calls the actual executor it wraps. In other words, the CachingExecutor layer only handles caching; the actual execution is still done by the underlying executor.

Let’s take a look at what happens in CachingExecutor

@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
    BoundSql boundSql = ms.getBoundSql(parameterObject);
    // Create a CacheKey and pass it on to the other query overload
    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
      throws SQLException {
    Cache cache = ms.getCache();
    // To enable the second-level cache, you need to add a <cache/> line to the SQL mapping file.
    // In simple terms: check the cache by CacheKey, then delegate to the actual executor
    if (cache != null) {
      flushCacheIfRequired(ms);
      if (ms.isUseCache() && resultHandler == null) {
        ensureNoOutParams(ms, parameterObject, boundSql);
        @SuppressWarnings("unchecked")
        List<E> list = (List<E>) tcm.getObject(cache, key);
        if (list == null) {
          list = delegate.<E> query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
          tcm.putObject(cache, key, list); // issue #578 and #116
        }
        return list;
      }
    }
    // Finally call the query method of the wrapped delegate object
    return delegate.<E> query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

The source confirms what I said above: CachingExecutor only does some caching work. Let's move on to the actual execution. Normally the call lands in SimpleExecutor (the default setting), and SimpleExecutor, like the other executors, extends BaseExecutor, so the query method in BaseExecutor is called first. You can refer to the executor UML diagram again here to keep the hierarchy in mind.

@Override
public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    ErrorContext.instance().resource(ms.getResource()).activity("executing a query").object(ms.getId());
    // If the executor is already closed, report an error
    if (closed) {
      throw new ExecutorException("Executor was closed.");
    }
    // Clear the local cache before querying, but only if the query stack is 0 (to handle recursive calls)
    if (queryStack == 0 && ms.isFlushCacheRequired()) {
      clearLocalCache();
    }
    List<E> list;
    try {
      // Increment the stack so that recursive calls do not clear the local cache
      queryStack++;
      // Look up the localCache with the cacheKey
      list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
      if (list != null) {
        // The localCache hit; handle the localOutputParameterCache if needed
        handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
      } else {
        // The main flow: query from the database here
        list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
      }
    } finally {
      // Pop the stack
      queryStack--;
    }
    if (queryStack == 0) {
      // Lazily load all the elements in the deferred queue
      for (DeferredLoad deferredLoad : deferredLoads) {
        deferredLoad.load();
      }
      // issue #601
      // Clear the lazy-load queue
      deferredLoads.clear();
      if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
        // issue #482
        // Clear the local cache if the scope is STATEMENT
        clearLocalCache();
      }
    }
    return list;
}

// Query from the database
private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    List<E> list;
    // Put a placeholder into the cache first
    localCache.putObject(key, EXECUTION_PLACEHOLDER);
    try {
      // The real execution happens here!!
      list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
    } finally {
      // Finally remove the placeholder
      localCache.removeObject(key);
    }
    // Add the result to the cache
    localCache.putObject(key, list);
    // If it is a stored procedure, the OUT parameters are also cached
    if (ms.getStatementType() == StatementType.CALLABLE) {
      localOutputParameterCache.putObject(key, parameter);
    }
    return list;
}

These two methods do some general bookkeeping, but the real work happens in the doQuery method. This method must be implemented by a subclass, so we dive into the concrete subclass SimpleExecutor.

@Override
public <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
    Statement stmt = null;
    try {
      // Get the configuration object
      Configuration configuration = ms.getConfiguration();
      // Create a new StatementHandler
      // The ResultHandler is passed in
      // The returned RoutingStatementHandler also uses the decorator pattern
      // You can check its source for yourself
      StatementHandler handler = configuration.newStatementHandler(wrapper, ms, parameter, rowBounds, resultHandler, boundSql);
      // Prepare the statement; this is the native JDBC code that creates the Statement
      stmt = prepareStatement(handler, ms.getStatementLog());
      // StatementHandler.query
      // The real main line is here
      return handler.<E>query(stmt, resultHandler);
    } finally {
      closeStatement(stmt);
    }
}

The main line here is the call to the StatementHandler's query method. Since StatementHandler is an interface, what is actually called is the query method of the generated RoutingStatementHandler.

@Override
public <E> List<E> query(Statement statement, ResultHandler resultHandler) throws SQLException {
    // Still the decorator pattern: delegate to the wrapped handler
    return delegate.<E>query(statement, resultHandler);
}

You'll notice the decorator pattern is used yet again (though nothing extra is done here, it simply delegates), and the call actually lands in the query method of PreparedStatementHandler.

@Override
public <E> List<E> query(Statement statement, ResultHandler resultHandler) throws SQLException {
    // Cast to a PreparedStatement
    PreparedStatement ps = (PreparedStatement) statement;
    // This is the native JDBC execute method
    ps.execute();
    // At this point we could already get the ResultSet, but its types still need processing
    // A list is returned
    return resultSetHandler.<E> handleResultSets(ps);
}

Next, the method in DefaultResultSetHandler that handles the execution result is called.

@Override
public List<Object> handleResultSets(Statement stmt) throws SQLException {
    ErrorContext.instance().activity("handling results").object(mappedStatement.getId());

    // Corresponds to the resultMap attribute of the <select> tag
    // This is what we mentioned in the previous article
    final List<Object> multipleResults = new ArrayList<>();

    int resultSetCount = 0;
    // Get the first ResultSet and wrap the traditional JDBC ResultSet into a ResultSetWrapper object
    // that holds the result column metadata
    ResultSetWrapper rsw = getFirstResultSet(stmt);

    // Get all the ResultMaps to be mapped (separated by commas)
    List<ResultMap> resultMaps = mappedStatement.getResultMaps();
    // The number of ResultMaps to map
    int resultMapCount = resultMaps.size();
    validateResultMapsCount(rsw, resultMapCount);
    // Loop over each ResultMap; usually there is only one
    while (rsw != null && resultMapCount > resultSetCount) {
        // Get the result mapping information
        ResultMap resultMap = resultMaps.get(resultSetCount);
        // Process the result set:
        // the rows are mapped according to the resultMap information
        // and stored as a list inside multipleResults
        handleResultSet(rsw, resultMap, multipleResults, null);

        rsw = getNextResultSet(stmt);
        cleanUpAfterHandlingResultSet();
        resultSetCount++;
    }

    // The resultSets attribute of the <select> tag; generally not used
    String[] resultSets = mappedStatement.getResultSets();
    if (resultSets != null) {
        while (rsw != null && resultSetCount < resultSets.length) {
            ResultMapping parentMapping = nextResultMaps.get(resultSets[resultSetCount]);
            if (parentMapping != null) {
                String nestedResultMapId = parentMapping.getNestedResultMapId();
                ResultMap resultMap = configuration.getResultMap(nestedResultMapId);
                handleResultSet(rsw, resultMap, null, parentMapping);
            }
            rsw = getNextResultSet(stmt);
            cleanUpAfterHandlingResultSet();
            resultSetCount++;
        }
    }

    // If there is only one result set, take the first element directly out of the multi-result list
    return collapseSingleResultList(multipleResults);
}

We are now ready to draw a simple flowchart for getting the result set and returning it.

Processing the result set

In the getFirstResultSet method, a ResultSet is obtained with native JDBC code and then encapsulated into a wrapper class called ResultSetWrapper. Since what we return to the DAO layer certainly cannot be a raw ResultSet, the ResultSet needs further processing.

So back in DefaultResultSetHandler, the main question becomes: what do we do with the result set?

// Process one result set
private void handleResultSet(ResultSetWrapper rsw, ResultMap resultMap, List<Object> multipleResults, ResultMapping parentMapping) throws SQLException {
    try {
      if (parentMapping != null) {
        handleRowValues(rsw, resultMap, null, RowBounds.DEFAULT, parentMapping);
      } else {
        if (resultHandler == null) {
          // If no resultHandler was supplied,
          // create a new DefaultResultHandler
          DefaultResultHandler defaultResultHandler = new DefaultResultHandler(objectFactory);
          // Call handleRowValues with it
          handleRowValues(rsw, resultMap, defaultResultHandler, rowBounds, null);
          // Get the list of mapped records and add it to multipleResults
          multipleResults.add(defaultResultHandler.getResultList());
        } else {
          // If a resultHandler was supplied, use it
          handleRowValues(rsw, resultMap, resultHandler, rowBounds, null);
        }
      }
    } finally {
      // Don't forget to close the result set
      // issue #228 (close resultsets)
      closeResultSet(rsw.getResultSet());
    }
}

So if the resultHandler is null, a default DefaultResultHandler is created; otherwise the supplied one is used. Either way, the handleRowValues method is eventually called for further processing.

Note that when a DefaultResultHandler is created, the mapped rows are collected into its internal list field, and that list is then added to multipleResults.

As you can see, handleRowValues ends up being called in both branches, so the main line of result-set processing continues there.

private void handleRowValues(ResultSetWrapper rsw, ResultMap resultMap, ResultHandler resultHandler, RowBounds rowBounds, ResultMapping parentMapping) throws SQLException {
    // Check whether nested result maps are involved
    if (resultMap.hasNestedResultMaps()) {
      ensureNoRowBounds();
      checkResultHandler();
      handleRowValuesForNestedResultMap(rsw, resultMap, resultHandler, rowBounds, parentMapping);
    } else {
      // Continue with the simple case
      handleRowValuesForSimpleResultMap(rsw, resultMap, resultHandler, rowBounds, parentMapping);
    }
}

Since no other complicated steps are involved, we will analyze only the general flow here: handleRowValuesForSimpleResultMap is called next to process the data.

private void handleRowValuesForSimpleResultMap(ResultSetWrapper rsw, ResultMap resultMap, ResultHandler
        resultHandler, RowBounds rowBounds, ResultMapping parentMapping) throws SQLException {
    DefaultResultContext<Object> resultContext = new DefaultResultContext<>();
    // Get the underlying result set
    ResultSet resultSet = rsw.getResultSet();
    // Use the rowBounds paging information to do logical (in-memory) paging
    // Don't worry about pagination here
    skipRows(resultSet, rowBounds);
    while (shouldProcessMoreRows(resultContext, rowBounds) && !resultSet.isClosed() && resultSet.next()) {
        // <discriminator> is a child of the <resultMap> tag
        ResultMap discriminatedResultMap = resolveDiscriminatedResultMap(resultSet, resultMap, null);
        // Encapsulate the query results into a POJO
        // rsw is the wrapper class around the result set, holding the returned data
        // Create a POJO from the result set wrapper
        Object rowValue = getRowValue(rsw, discriminatedResultMap, null);
        // Store the object (also handles nested mappings)
        storeObject(resultHandler, resultContext, rowValue, parentMapping, resultSet);
    }
}

In plain terms, what the method above does is take the result-set wrapper and the class defined at the start (say the POJO is an Admin class with fields such as account and password) and fill an Object entity with the row data; that entity is then stored somewhere.
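
For concreteness, here is the kind of POJO the text refers to; the Admin class and its fields are only a hypothetical illustration, not something taken from the analyzed source.

// A hypothetical POJO used as the mapping target in this article's example
public class Admin {
  private String account;
  private String password;

  // MyBatis fills these fields via reflection/setters during result mapping
  public String getAccount() { return account; }
  public void setAccount(String account) { this.account = account; }
  public String getPassword() { return password; }
  public void setPassword(String password) { this.password = password; }
}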

But we still haven't answered how the Object produced by getRowValue is loaded into multipleResults, or rather how it is added to the list field of the resultHandler (because that is where it eventually ends up).

As you can see, the answer must lie in storeObject (because a resultHandler is passed to it as a parameter), so let's do a quick analysis of it.

private void storeObject(ResultHandler resultHandler, DefaultResultContext resultContext, Object rowValue, ResultMapping parentMapping, ResultSet rs) throws SQLException {
    if (parentMapping != null) {
      linkToParents(rs, parentMapping, rowValue);
    } else {
      // The general case is handled here
      callResultHandler(resultHandler, resultContext, rowValue);
    }
}

private void callResultHandler(ResultHandler resultHandler, DefaultResultContext resultContext, Object rowValue) {
    resultContext.nextResultObject(rowValue);
    // resultContext carries some context information, including the wrapped Object
    // Call the resultHandler's handleResult method
    resultHandler.handleResult(resultContext);
}

// This is implemented in DefaultResultHandler:
// the record is added to the list field
@Override
public void handleResult(ResultContext context) {
    // Add the record to the list
    list.add(context.getResultObject());
}

Now we are ready to look at how the resulting POJO object is actually built, so let's step into the getRowValue method.

// Core: wrap one row of the result set into a POJO
private Object getRowValue(ResultSetWrapper rsw, ResultMap resultMap) throws SQLException {
    // Instantiate a ResultLoaderMap (the lazyLoader)
    final ResultLoaderMap lazyLoader = new ResultLoaderMap();
    // Call createResultObject, which essentially news up the result object
    // (if it is a simple type, it is created directly)
    Object resultObject = createResultObject(rsw, resultMap, lazyLoader, null);
    if (resultObject != null && !typeHandlerRegistry.hasTypeHandler(resultMap.getType())) {
      // A non-simple type generally has no TypeHandler, so this branch is taken.
      // After this step, metaObject holds resultObject — remember that
      final MetaObject metaObject = configuration.newMetaObject(resultObject);
      boolean foundValues = !resultMap.getConstructorResultMappings().isEmpty();
      if (shouldApplyAutomaticMappings(resultMap, false)) {
        // Automatically map columns to properties
        foundValues = applyAutomaticMappings(rsw, resultMap, metaObject, null) || foundValues;
      }
      // Apply the explicitly configured property mappings
      foundValues = applyPropertyMappings(rsw, resultMap, metaObject, lazyLoader, null) || foundValues;
      foundValues = lazyLoader.size() > 0 || foundValues;
      resultObject = foundValues ? resultObject : null;
      return resultObject;
    }
    return resultObject;
}

We performed one important operation above: constructing a MetaObject instance from resultObject and a few other things. If you look into the source you will find that the metaObject holds the resultObject inside it; due to space constraints I won't go into more detail here, so check the source yourself if you are interested. The overall idea is to create an empty "shell" of a POJO from the configuration defined earlier, and then assign values to that shell from the data in the result set.
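
To illustrate the "shell plus assignment" idea, here is a minimal sketch of how MetaObject wraps an object and sets a property, using the configuration.newMetaObject and metaObject.setValue calls seen in the source; the Admin class and the property values are the hypothetical example from above, not real mapping code.

import org.apache.ibatis.reflection.MetaObject;
import org.apache.ibatis.session.Configuration;

public class MetaObjectDemo {
  static void demo(Configuration configuration) {
    Admin resultObject = new Admin();                    // the empty "shell"
    MetaObject metaObject = configuration.newMetaObject(resultObject);

    // What applyPropertyMappings effectively does for each mapped column:
    metaObject.setValue("account", "admin");             // fills resultObject.account
    metaObject.setValue("password", "123456");           // fills resultObject.password

    // metaObject holds resultObject, so the shell is now filled
    System.out.println(resultObject.getAccount());       // prints "admin"
  }
}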

Let's see how Mybatis assigns values to this empty shell.

private boolean applyPropertyMappings(ResultSetWrapper rsw, ResultMap resultMap, MetaObject metaObject, ResultLoaderMap lazyLoader, String columnPrefix)
      throws SQLException {
    // Get the list of mapped column names
    final List<String> mappedColumnNames = rsw.getMappedColumnNames(resultMap, columnPrefix);
    boolean foundValues = false;
    final List<ResultMapping> propertyMappings = resultMap.getPropertyResultMappings();
    // Assign the values in a loop
    for (ResultMapping propertyMapping : propertyMappings) {
      final String column = prependPrefix(propertyMapping.getColumn(), columnPrefix);
      if (propertyMapping.isCompositeResult()
          || (column != null && mappedColumnNames.contains(column.toUpperCase(Locale.ENGLISH)))
          || propertyMapping.getResultSet() != null) {
        // Get the value of each column from the result set wrapper
        // An important class called TypeHandler is involved here
        Object value = getPropertyMappingValue(rsw.getResultSet(), metaObject, propertyMapping, lazyLoader, columnPrefix);
        // issue #541 make property optional
        final String property = propertyMapping.getProperty();
        // issue #377, call setter on nulls
        if (value != NO_VALUE && property != null && (value != null || configuration.isCallSettersOnNulls())) {
          if (value != null || !metaObject.getSetterType(property).isPrimitive()) {
            // The value is assigned through metaObject,
            // i.e. the field in resultObject is set
            metaObject.setValue(property, value);
          }
          foundValues = true;
        }
      }
    }
    return foundValues;
}

The whole flow above is: get the names of the mapped columns, fetch the value of each corresponding column from the result set via the wrapper class ResultSetWrapper, and assign it to the matching field of the resultObject "empty shell" held by metaObject. After that, resultObject is no longer an empty shell. The important piece I highlighted above is this line of code.

Object value = getPropertyMappingValue(rsw.getResultSet(), metaObject, propertyMapping, lazyLoader, columnPrefix);

Have you ever wondered how the raw column value is turned into that Object?

Let's step in and find out.

private Object getPropertyMappingValue(ResultSet rs, MetaObject metaResultObject, ResultMapping propertyMapping, ResultLoaderMap lazyLoader, String columnPrefix)
      throws SQLException {
    if (propertyMapping.getNestedQueryId() != null) {
      return getNestedQueryMappingValue(rs, metaResultObject, propertyMapping, lazyLoader, columnPrefix);
    } else if (propertyMapping.getResultSet() != null) {
      addPendingChildRelation(rs, metaResultObject, propertyMapping);
      return NO_VALUE;
    } else if (propertyMapping.getNestedResultMapId() != null) {
      // the user added a column attribute to a nested result map, ignore it
      return NO_VALUE;
    } else {
      // Here the TypeHandler comes into play:
      // the TypeHandler configured on the ResultMapping is obtained,
      // and the value is then produced by that TypeHandler
      final TypeHandler<?> typeHandler = propertyMapping.getTypeHandler();
      final String column = prependPrefix(propertyMapping.getColumn(), columnPrefix);
      return typeHandler.getResult(rs, column);
    }
}

This means that the final object conversion is done through a TypeHandler, which, as I mentioned in the previous article, is Mybatis' main utility for type conversion.
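
As a side note, this is also the extension point you use when a column needs a non-default conversion. Below is a minimal sketch of a custom TypeHandler, assuming the standard MyBatis 3 BaseTypeHandler contract; the Y/N-to-Boolean mapping is only an illustration.

import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.ibatis.type.BaseTypeHandler;
import org.apache.ibatis.type.JdbcType;

// Converts a CHAR(1) 'Y'/'N' column to a Java Boolean and back
public class YesNoBooleanTypeHandler extends BaseTypeHandler<Boolean> {
  @Override
  public void setNonNullParameter(PreparedStatement ps, int i, Boolean parameter, JdbcType jdbcType) throws SQLException {
    ps.setString(i, parameter ? "Y" : "N");
  }
  @Override
  public Boolean getNullableResult(ResultSet rs, String columnName) throws SQLException {
    return "Y".equals(rs.getString(columnName));
  }
  @Override
  public Boolean getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
    return "Y".equals(rs.getString(columnIndex));
  }
  @Override
  public Boolean getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
    return "Y".equals(cs.getString(columnIndex));
  }
}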

With that, we have finished analyzing the basic execution flow of Mybatis. I found a good diagram online that you can use alongside the analysis above (though I still suggest you debug through the source yourself). The picture is a little blurry and I have not found a clearer version so far.

Conclusion

To sum up, the whole Mybatis query flow wraps the native JDBC code and performs the result-set type conversion that plain JDBC does not do for us.
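
To see what that saves us, here is a rough sketch of the equivalent plain-JDBC code; the SQL, the admin table, and the Admin POJO (the hypothetical example from earlier) are illustrative only, showing the boilerplate that Mybatis hides.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PlainJdbcDemo {
  static List<Admin> selectAdmins(Connection conn) throws SQLException {
    List<Admin> result = new ArrayList<>();
    // Everything below is done for us by the Executor / StatementHandler / ResultSetHandler chain
    try (PreparedStatement ps = conn.prepareStatement("SELECT account, password FROM admin")) {
      try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          Admin admin = new Admin();                      // createResultObject's job
          admin.setAccount(rs.getString("account"));      // applyPropertyMappings + TypeHandler's job
          admin.setPassword(rs.getString("password"));
          result.add(admin);                              // DefaultResultHandler's job
        }
      }
    }
    return result;
  }
}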