Mybatis related concepts

Object/Relational Mapping (ORM)

ORM stands for Object/Relational Mapping: it maps the objects of an object-oriented programming language to tables in a relational database. With an ORM framework doing the mapping, programmers get both the simplicity and ease of use of object-oriented languages and the technical strengths of relational databases. ORM wraps the relational database in an object-oriented model; it is an intermediate solution for the mismatch between object-oriented languages and relational databases. Instead of accessing the underlying database directly, the application manipulates persistent objects in an object-oriented way, and the ORM framework translates those object-oriented operations into the underlying SQL. The effect an ORM framework achieves: saving, modifying, or deleting a persistent object becomes the corresponding database operation.
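The translation step can be pictured with a toy sketch: the application calls a save-style method on an object, and the "framework" produces the SQL underneath. Every name here (class, method) is hypothetical, purely to illustrate the idea:

```java
import java.util.Locale;

// Toy illustration of the ORM idea: the application works with an object,
// and the framework translates the object operation into SQL underneath.
public class OrmIdea {

    // A plain persistent object (what the application manipulates).
    static class User {
        int id;
        String username;
        User(int id, String username) { this.id = id; this.username = username; }
    }

    // What an ORM save() conceptually does: object in, SQL out.
    static String toInsertSql(User u) {
        return String.format(Locale.ROOT,
                "insert into user (id, username) values (%d, '%s')", u.id, u.username);
    }

    public static void main(String[] args) {
        // The caller never writes SQL; it just "saves" an object.
        System.out.println(toInsertSql(new User(1, "tom")));
    }
}
```

A real ORM also handles parameter escaping, transactions, and result mapping; this sketch only shows the object-to-SQL translation at the core of the concept.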

Introduction of Mybatis

MyBatis is an excellent semi-automatic, lightweight ORM persistence-layer framework (semi-automatic meaning you still write the SQL statements yourself). It supports custom SQL, stored procedures, and advanced mappings. MyBatis eliminates almost all of the JDBC code, manual parameter setting, and result-set retrieval. It can use simple XML or annotations to configure and map primitives, interfaces, and Plain Old Java Objects (POJOs) to database records.

Mybatis history

iBATIS was an Apache open source project. In June 2010 the Apache Software Foundation migrated the project to Google Code; as the development team moved, iBatis 3.x was officially renamed MyBatis. The code was migrated to GitHub in November 2013.

The name iBATIS combines "Internet" and "abatis". iBATIS is a Java-based persistence-layer framework that provides SQL Maps and Data Access Objects (DAO).

Mybatis advantage

Mybatis is a semi-automated persistence-layer framework: developers still write and tune the core SQL themselves, but SQL and Java code are kept separate with clear functional boundaries; one side focuses on business logic, the other on data.

The analysis diagram is as follows:

Basic application of Mybatis

Quick start

MyBatis official website: www.mybatis.org/mybatis-3/

Development steps:

  • ① Add the MyBatis coordinates

  • ② Create table user

  • ③ Write the User entity class

  • ④ Write the mapping file UserMapper.xml

  • ⑤ Write the core file SqlMapConfig.xml

  • ⑥ Write a test class

Environment setup

1) Import the coordinates of MyBatis and other relevant coordinates


      
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.lagou</groupId>
    <artifactId>mybatis_quickStarter</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.encoding>UTF-8</maven.compiler.encoding>
        <java.version>1.8</java.version>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <!-- Dependencies -->
    <dependencies>
        <!-- MyBatis coordinates -->
        <dependency>
            <groupId>org.mybatis</groupId>
            <artifactId>mybatis</artifactId>
            <version>3.4.5</version>
        </dependency>
        <!-- MySQL driver -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.6</version>
            <scope>runtime</scope>
        </dependency>
        <!-- Unit test coordinates -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
    </dependencies>
</project>

2) Create table user

3) Write the User entity

public class User {

    private int id;

    private String username;

    private String password;

    // getters and setters omitted

}
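Spelled out, the omitted accessors look like the sketch below (the toString override is an addition of ours, so that the later System.out.println(user) calls produce readable output):

```java
// The User entity with its accessors written out.
public class User {

    private int id;
    private String username;
    private String password;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }

    // Added so printed users are readable; not in the original listing.
    @Override
    public String toString() {
        return "User{id=" + id + ", username='" + username + "', password='" + password + "'}";
    }
}
```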

4) Write UserMapper mapping file


      
<!DOCTYPE mapper
        PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">

<mapper namespace="com.lagou.dao.IUserDao">
    <!-- Query users; resultType is the return value type (an alias, case-insensitive) -->
    <select id="findAll" resultType="uSeR">
       select * from user
    </select>
</mapper>

5) Write the MyBatis core file


      
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">

<configuration>

    <!-- Load the external properties file -->
    <properties resource="jdbc.properties"></properties>

    <!-- Give the fully qualified class name of the entity class an alias -->
    <typeAliases>
        <!-- Alias a single entity -->
      <!-- <typeAlias type="com.lagou.pojo.User" alias="user"></typeAlias>-->
        <!-- Batch aliasing: every class in the package gets its class name as the alias; aliases are case-insensitive -->
        <package name="com.lagou.pojo"/>
    </typeAliases>

    <!-- Environments -->
    <environments default="development">
        <environment id="development">
            <!-- The current transaction is managed by JDBC -->
            <transactionManager type="JDBC"></transactionManager>
            <!-- Data source information; the connection pool is managed by MyBatis -->
            <dataSource type="POOLED">
                <property name="driver" value="${jdbc.driver}"/>
                <property name="url" value="${jdbc.url}"/>
                <property name="username" value="${jdbc.username}"/>
                <property name="password" value="${jdbc.password}"/>
            </dataSource>
        </environment>
    </environments>

    <!-- Register the mapping configuration files -->
    <mappers>
        <mapper resource="UserMapper.xml"></mapper>
    </mappers>
</configuration>

The jdbc.properties file is as follows:

jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql:///zdy_mybatis
jdbc.username=root
jdbc.password=root

6) Write test code

public class MybatisTest {

    @Test
    public void test1() throws IOException {
        // 1. Use the Resources utility class to load the configuration file as a byte input stream
        InputStream resourceAsStream = Resources.getResourceAsStream("sqlMapConfig.xml");
        // 2. Parse the configuration file and create the SqlSessionFactory
        SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
        // 3. Produce a SqlSession
        SqlSession sqlSession = sqlSessionFactory.openSession(); // A transaction is opened by default but not committed automatically;
                                                                 // commit manually after insert, delete, or update operations
        // 4. SqlSession methods: selectList (query all), selectOne (query one), insert, update, delete
        // The statementId is the namespace.id value from the mapper XML
        List<User> users = sqlSession.selectList("user.findAll");
        for (User user : users) {
            System.out.println(user);
        }
        sqlSession.close();
    }
}

CRUD operation of MyBatis

Insert data operation of MyBatis

1) Write UserMapper mapping file

<mapper namespace="userMapper">
    <insert id="add" parameterType="com.lagou.domain.User">
        insert into user values(#{id},#{username},#{password})
    </insert>
</mapper>

2) Write code to insert the entity User

InputStream resourceAsStream = Resources.getResourceAsStream("SqlMapConfig.xml");
SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
SqlSession sqlSession = sqlSessionFactory.openSession();
// statementId is namespace.id
int insert = sqlSession.insert("userMapper.add", user);
System.out.println(insert);

// Commit the transaction
sqlSession.commit();
sqlSession.close();

3) Pay attention to problems in insertion operation

  • Insert statements use the insert tag

  • Use the parameterType attribute in the mapping file to specify the type of the data being inserted

  • The SQL statement uses #{entity attribute name} to reference attribute values of the entity

  • The API for the insert operation is sqlSession.insert("namespace.id", entity object);

  • The insert operation changes database data, so the transaction must be committed explicitly via sqlSession.commit()
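Conceptually, each #{attribute} placeholder becomes a JDBC-style ? parameter whose value is read from the entity. The plain-Java analogy below (not MyBatis internals; all names here are our own) shows that substitution:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Plain-Java analogy of #{property} handling: each placeholder is rewritten
// to a '?' and the corresponding attribute value is collected, in order,
// for binding to a prepared statement.
public class PlaceholderDemo {

    static final Pattern PLACEHOLDER = Pattern.compile("#\\{(\\w+)\\}");

    // Returns the rewritten SQL; fills 'ordered' with values in placeholder order.
    static String prepare(String sql, Map<String, Object> attrs, List<Object> ordered) {
        Matcher m = PLACEHOLDER.matcher(sql);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            ordered.add(attrs.get(m.group(1))); // value for this attribute name
            m.appendReplacement(out, "?");      // placeholder becomes a bind marker
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> user = new LinkedHashMap<>();
        user.put("id", 1);
        user.put("username", "tom");
        user.put("password", "123");
        List<Object> params = new ArrayList<>();
        String sql = prepare("insert into user values(#{id},#{username},#{password})", user, params);
        System.out.println(sql);    // insert into user values(?,?,?)
        System.out.println(params); // [1, tom, 123]
    }
}
```

Because values travel as bound parameters rather than string concatenation, this style also avoids SQL injection, which is one reason #{} is preferred.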

Modify data operation of MyBatis

1) Write UserMapper mapping file

<mapper namespace="userMapper">
    <update id="update" parameterType="com.lagou.domain.User">
        update user set username=#{username},password=#{password} where id=#{id}
    </update>
</mapper>

2) Write code that modifies the entity User

 
InputStream resourceAsStream = Resources.getResourceAsStream("sqlMapConfig.xml");
SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
SqlSession sqlSession = sqlSessionFactory.openSession();

User user = new User();
user.setId(4);
user.setUsername("lucy");
// statementId is namespace.id
sqlSession.update("userMapper.update", user);
sqlSession.commit();
sqlSession.close();

3) Pay attention to problems in the modification operation

  • Update statements use the update tag

  • The API for the update operation is sqlSession.update("namespace.id", entity object);

Delete data operation of MyBatis

1) Write UserMapper mapping file

<mapper namespace="userMapper">
    <delete id="delete" parameterType="java.lang.Integer">
        delete from user where id=#{id}
    </delete>
</mapper>

2) Write code to delete data

InputStream resourceAsStream = Resources.getResourceAsStream("sqlMapConfig.xml");
SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
SqlSession sqlSession = sqlSessionFactory.openSession();
sqlSession.delete("userMapper.delete", 6);
sqlSession.commit();
sqlSession.close();

3) Pay attention to problems in deletion operation

  • Delete statements use the delete tag

  • The SQL statement uses #{any name} to reference the single parameter passed in

  • The API for the delete operation is sqlSession.delete("namespace.id", Object);

MyBatis mapping file overview

Analyzing the quick-start core configuration file

MyBatis common configuration parsing

1) environments

Configure the database environment and support multi-environment configuration

There are two types of transactionManager:

  • JDBC: This configuration directly uses JDBC’s commit and rollback Settings, which rely on connections from the data source to manage transaction scopes.

  • MANAGED: This configuration does almost nothing. It never commits or rolls back a connection, but lets the container manage the entire life cycle of the transaction (such as the context of the JEE application server). By default it closes the connection, however some containers do not want this, so you need to set the closeConnection property to false to prevent its default closing behavior.

There are three types of dataSource:

  • UNPOOLED: The implementation of this data source simply opens and closes the connection each time it is requested.

  • POOLED: This data source implementation uses the concept of “pooling” to organize JDBC connection objects.

  • JNDI: This data source is implemented for use in a container such as an EJB or application server, which can centrally or externally configure the data source and then place a reference to the JNDI context.

2) mapper tag

This tag loads mappings, in the following ways:

  • Use a resource reference relative to the classpath, for example:
<mapper resource="org/mybatis/builder/AuthorMapper.xml"/>
  • Use a fully qualified resource locator (URL), for example:
<mapper url="file:///var/mappers/AuthorMapper.xml"/>
  • Use the fully qualified class name of a mapper interface, for example:
<mapper class="org.mybatis.builder.AuthorMapper"/>
  • Register all mapper interfaces within a package as mappers, for example:
<package name="org.mybatis.builder"/>

Mybatis API introduction

The builder: SqlSessionFactoryBuilder

SqlSessionFactory build(InputStream inputStream)

Build an SqlSessionFactory object by loading the input stream of the mybatis core file

String resource = "org/mybatis/builder/mybatis-config.xml";
InputStream inputStream = Resources.getResourceAsStream(resource);
SqlSessionFactoryBuilder builder = new SqlSessionFactoryBuilder();
SqlSessionFactory factory = builder.build(inputStream);

The Resources utility class lives in the org.apache.ibatis.io package. It helps you load resource files from the classpath, the file system, or a web URL.

The factory object: SqlSessionFactory

SqlSessionFactory has several methods for creating SqlSession instances. Two are commonly used: openSession(), which opens a session whose transaction must be committed manually, and openSession(boolean autoCommit), which lets you enable auto-commit.

SqlSession Session object

The SqlSession instance is a very powerful class in MyBatis. It exposes all the methods for executing statements, committing or rolling back transactions, and obtaining mapper instances.

The main methods for executing a statement are:

<T> T selectOne(String statement, Object parameter)
<E> List<E> selectList(String statement, Object parameter)
int insert(String statement, Object parameter)
int update(String statement, Object parameter)
int delete(String statement, Object parameter)

The main methods of handling transactions are:

void commit()
void rollback()

Mybatis Dao layer implementation

Traditional development

Write the UserDao interface

public interface UserDao {
    List<User> findAll() throws IOException;
}

Write the UserDaoImpl implementation

public class UserDaoImpl implements UserDao {

    public List<User> findAll() throws IOException {

        InputStream resourceAsStream = Resources.getResourceAsStream("SqlMapConfig.xml");
        SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
        SqlSession sqlSession = sqlSessionFactory.openSession();
        List<User> userList = sqlSession.selectList("userMapper.findAll");
        sqlSession.close();
        return userList;
    }
}

Testing the traditional way

@Test
public void testTraditionDao() throws IOException {
    UserDao userDao = new UserDaoImpl();
    List<User> all = userDao.findAll();
    System.out.println(all);
}

Agent development approach

Introduction to agent development

The proxy development approach for the DAO layer is the mainstream in enterprise development with Mybatis. Mapper-interface development only requires the programmer to write the Mapper interface (equivalent to a DAO interface); the Mybatis framework creates a dynamic proxy object for the interface based on its definition, and the proxy's method bodies behave like the DAO implementation-class methods shown above.

Mapper interface development should follow the following specifications:

  • The namespace in the mapper.xml file is the same as the fully qualified name of the Mapper interface

  • The Mapper interface method name is the same as the ID of each statement defined in mapper.xml

  • The input parameter types for the Mapper interface methods are the same type as parameterType for each SQL defined in mapper.xml

  • The output parameter type of the Mapper interface method is the same type as the resultType of each SQL defined in mapper.xml

Write the UserMapper interface
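A sketch of such an interface, following the four rules above (the statement ids findAll and findById are assumed to match the select ids in the corresponding UserMapper.xml; the stub User class is only here so the sketch compiles standalone, the real entity is defined earlier):

```java
import java.util.List;

// Sketch of the mapper interface per the four rules above.
public interface UserMapper {
    List<User> findAll();      // matches <select id="findAll" ...> in UserMapper.xml
    User findById(Integer id); // matches <select id="findById" ...> in UserMapper.xml
}

// Minimal stand-in so this sketch is self-contained.
class User { }
```

Note there is no implementation class: sqlSession.getMapper(UserMapper.class) generates the proxy that executes the mapped statements.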

Test agent mode

@Test
public void testProxyDao() throws IOException {

    InputStream resourceAsStream = Resources.getResourceAsStream("SqlMapConfig.xml");
    SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
    SqlSession sqlSession = sqlSessionFactory.openSession();
    // The MyBatis framework generates the UserMapper interface implementation class
    UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
    User user = userMapper.findById(1);
    System.out.println(user);
    sqlSession.close();
}

Mybatis configuration file in depth

The core configuration file SqlMapConfig.xml

MyBatis core configuration file hierarchy

MyBatis common configuration parsing

1) environments

Refer to the above

2) mapper tag

Refer to the above

3) properties tag

In practice it is customary to extract data source configuration into a separate properties file; this tag loads that extra configuration file

4) typeAliases tag

A type alias is a short name for a Java type. The original type name is set as follows

Configure the typeAliases and define the alias for com.lagou.domain.User as User

However, if a project has hundreds of entity classes, you would have to configure hundreds of typeAlias tags. This can be changed to the package configuration, which registers every entity class under the package with its class name as the alias.

<!-- Give the fully qualified class name of the entity class an alias -->
<typeAliases>
    <!-- Alias a single entity -->
  <!-- <typeAlias type="com.lagou.pojo.User" alias="user"></typeAlias>-->
    <!-- Batch aliasing: every class in the package gets its class name as the alias; aliases are case-insensitive -->
    <package name="com.lagou.pojo"/>
</typeAliases>

Above we defined custom aliases; the mybatis framework has already registered aliases for some common types for us, such as string, int, double, boolean, map, and list.

Map configuration file mapper.xml

Dynamic SQL statement overview

In the Mybatis mapping files so far, our SQL has been relatively simple. Sometimes, when the business logic is complex, the SQL changes dynamically, and the static SQL we have written so far cannot meet the requirements.

The official documentation describes it as follows:

Dynamic SQL if and WHERE

We query with different SQL statements depending on the values in the entity class. For example, if id is not empty we can query by id, and if username is not empty we also add username as a condition. This situation arises all the time in multi-condition combined queries.

Write Dao layer interface as follows:

// multi-condition combination query: demonstrate if
public List<User> findByCondition(User user); 

If you use only if, you need to add the always-true condition where 1 = 1:

<!-- Extract the reusable SQL fragment -->
<sql id="selectUser">
     select * from user
</sql>
<select id="findByCondition" parameterType="user" resultType="user">
    <include refid="selectUser"></include> where 1 = 1
    <if test="id != null">
        and id = #{id}
    </if>
    <if test="username != null">
        and username = #{username}
    </if>
</select>

The where tag emits the WHERE keyword itself and strips a leading AND or OR keyword

So the above dynamic SQL can be rewritten as follows:

<select id="findByCondition" parameterType="user" resultType="user">
    <include refid="selectUser"></include>
    <where>
        <if test="id != null">
            and id = #{id}
        </if>
        <if test="username != null">
            and username = #{username}
        </if>
    </where>

</select>

If both query conditions id and username are present, the generated SQL is select * from user WHERE id = ? and username = ?; when only id is present, it is select * from user WHERE id = ?.

Foreach for dynamic SQL

foreach performs SQL concatenation in a loop, for example: SELECT * FROM user WHERE id IN (1, 2, 5).

Write Dao layer interface as follows:

// Multi-value query: demonstrates foreach
public List<User> findByIds(int[] ids);

XML is as follows:

<!-- foreach demo -->
<select id="findByIds" parameterType="list" resultType="user">
    <include refid="selectUser"></include>
    <where>
        <foreach collection="array" open="id in (" close=")" item="id" separator=",">
            #{id}
        </foreach>
    </where>
</select>

The attributes of the foreach tag have the following meanings:

  • collection: the collection element to iterate over; written without #{}

  • open: the beginning of the statement

  • close: the closing part

  • item: the variable name for each element produced while traversing the collection

  • separator: the separator between elements
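The open/close/item/separator attributes behave like the string-building loop below. This is a plain-Java analogy of the expansion, not MyBatis internals; in the real tag each element is emitted as a #{id} parameter binding rather than a literal value:

```java
import java.util.StringJoiner;

// Plain-Java analogy of <foreach open="id in (" close=")" separator=",">:
// each element is emitted once, joined by the separator, wrapped by open/close.
public class ForeachDemo {

    static String expand(int[] ids, String open, String separator, String close) {
        StringJoiner joiner = new StringJoiner(separator, open, close);
        for (int id : ids) {
            joiner.add(String.valueOf(id)); // the real tag emits a #{id} binding here
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        // Produces the IN clause from the example above: id in (1,2,5)
        System.out.println("select * from user where "
                + expand(new int[]{1, 2, 5}, "id in (", ",", ")"));
    }
}
```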

The test code passes an int array, for example userMapper.findByIds(new int[]{1, 2, 5}).

Include for SQL fragment extraction

Repeated SQL can be extracted into a sql fragment and referenced with include, achieving SQL reuse.

<!-- Extract the reusable SQL fragment -->
<sql id="selectUser">
     select * from user
</sql>
<!-- foreach demo -->
<select id="findByIds" parameterType="list" resultType="user">
    <include refid="selectUser"></include>
    <where>
        <foreach collection="array" open="id in (" close=")" item="id" separator=",">
            #{id}
        </foreach>
    </where>
</select>

Mybatis complex mapping development

One-to-one query

One-to-one query model

The relationship between the user table and the order table is that one user has multiple orders, and one order belongs to only one user.

One to one query requirement: query an order, and at the same time query the user that the order belongs to

One-to-one query statement

SQL statement: select * from orders o, user u where o.uid = u.id;

The query result is as follows:

Create Order and User entities

public class Order {

    private Integer id;
    private String orderTime;
    private Double total;
    
    // Indicate which user the order belongs to
    private User user;
}
public class User {
    private int id;
    private String username;
    private String password;
    private Date birthday;
}

Create the OrderMapper interface

public interface OrderMapper {
    List<Order> findAll();
}

Configuration OrderMapper. XML

<mapper namespace="com.lagou.mapper.IOrderMapper">
    <!-- private Integer id; private String orderTime; private Double total; -->

    <resultMap id="orderMap" type="com.lagou.pojo.Order">
        <result property="id" column="id"></result>
        <result property="orderTime" column="orderTime"></result>
        <result property="total" column="total"></result>

        <association property="user" javaType="com.lagou.pojo.User">
            <result property="id" column="uid"></result>
            <result property="username" column="username"></result>
        </association>
    </resultMap>

    <!-- resultMap: manually configure the mapping between entity attributes and table fields -->
    <select id="findOrderAndUser" resultMap="orderMap">
        select * from orders o,user u where o.uid = u.id
    </select>
</mapper>

The essence is to map column names from SQL queries to attributes of entity classes.
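As a toy illustration of that idea (our own names, not MyBatis internals): copy each configured column of a result row into the matching entity property, including the nested association object:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of what a resultMap does conceptually: copy values from a
// result row (column name -> value) into entity properties, including the
// nested <association> object.
public class ResultMapIdea {

    static class User { int id; String username; }
    static class Order { int id; double total; User user = new User(); }

    static Order mapRow(Map<String, Object> row) {
        Order order = new Order();
        order.id = (Integer) row.get("id");       // <result property="id" column="id"/>
        order.total = (Double) row.get("total");  // <result property="total" column="total"/>
        order.user.id = (Integer) row.get("uid"); // nested <association property="user">
        order.user.username = (String) row.get("username");
        return order;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("id", 1);
        row.put("total", 99.9);
        row.put("uid", 4);
        row.put("username", "lucy");
        Order o = mapRow(row);
        System.out.println(o.id + " " + o.total + " " + o.user.username); // prints: 1 99.9 lucy
    }
}
```

MyBatis does this generically (via reflection and the configured property/column pairs) instead of hand-written assignments, but the mapping it performs is the same.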

The test results

One-to-many query

A one-to-many query model

The relationship between the user table and the order table is that one user has multiple orders, and one order belongs to only one user

One to many query requirements: query a user and query the orders that the user has at the same time.

A statement for a one-to-many query

SQL statement: select *, o.id oid from user u left join orders o on u.id = o.uid;

The query result is as follows:

Modifying the User entity

public class Order {

    private int id;
    private Date ordertime;
    private double total;

    // Represents the customer to which the current order belongs
    private User user;
}

public class User {

    private int id;
    private String username;
    private String password;
    private Date birthday;
    // Represents which orders the current user has
    private List<Order> orderList;
}

Create the UserMapper interface

public interface UserMapper {
    List<User> findAll();
}

Configuration UserMapper. XML

<mapper namespace="com.lagou.mapper.UserMapper">
    <resultMap id="userMap" type="com.lagou.domain.User">
        <result column="id" property="id"></result>
        <result column="username" property="username"></result>
        <result column="password" property="password"></result>
        <result column="birthday" property="birthday"></result>
        <collection property="orderList" ofType="com.lagou.domain.Order">
            <result column="oid" property="id"></result>
            <result column="ordertime" property="ordertime"></result>
            <result column="total" property="total"></result>
        </collection>
    </resultMap>

    <select id="findAll" resultMap="userMap">
        select *,o.id oid from user u left join orders o on u.id=o.uid
    </select>
</mapper>

The test results

Many-to-many query

Many-to-many query model

The relationship between a user table and a role table is that a user has multiple roles and a role is used by multiple users

Many-to-many query: All roles of the user are queried at the same time

Many-to-many query statement

SQL statement: select u.*, r.*, r.id rid from user u left join user_role ur on u.id = ur.user_id inner join role r on ur.role_id = r.id;

The query result is as follows

Create the Role entity and modify the User entity

public class User {

    private int id;
    private String username;
    private String password;
    private Date birthday;

    // Represents which orders the current user has
    private List<Order> orderList;
    // Indicates the roles of the current user
    private List<Role> roleList;
}

public class Role {
    private int id;
    private String rolename;
}

Add the UserMapper interface method

List<User> findAllUserAndRole();

Configuration UserMapper. XML

<resultMap id="userRoleMap" type="com.lagou.pojo.User">
    <result property="id" column="userid"></result>
    <result property="username" column="username"></result>
    <collection property="roleList" ofType="com.lagou.pojo.Role">
        <result property="id" column="roleid"></result>
        <result property="roleName" column="roleName"></result>
        <result property="roleDesc" column="roleDesc"></result>
    </collection>
</resultMap>


<select id="findAllUserAndRole" resultMap="userRoleMap">
    select * from user u left join sys_user_role ur on u.id = ur.userid
                   left join sys_role r on r.id = ur.roleid
</select>

The test results

Mybatis annotation development

MyBatis common annotations

Annotation development has become more and more popular in recent years. Mybatis also supports an annotation-based development style, which lets us write fewer Mapper mapping files. We'll start with basic CRUD and then look at complex multi-table mappings.

  • @Insert: implements insert

  • @Update: implements update

  • @Delete: implements delete

  • @Select: implements query

  • @Result: implements result-set field mapping

  • @Results: used with @Result to map multiple result fields

  • @One: implements one-to-one result-set encapsulation

  • @Many: implements one-to-many result-set encapsulation

MyBatis CRUD operations

The code in the DAO interface is as follows:

// Add a user
@Insert("insert into user values(#{id},#{username})")
public void addUser(User user);

// Update the user
@Update("update user set username = #{username} where id = #{id}")
public void updateUser(User user);

// Query the user
@Select("select * from user")
public List<User> selectUser(a);

// Delete the user
@Delete("delete from user where id = #{id}")
public void deleteUser(Integer id);

Note that SqlMapConfig.xml must register the mapper interfaces, not mapper.xml files, so use the following configuration:

<!-- Register mapper interfaces -->
<mappers>
   <!-- <mapper class="com.lagou.mapper.IUserMapper"></mapper>-->
    <package name="com.lagou.mapper"/>
</mappers>

With class, you must register each mapper individually; using package directly is more convenient. Note that any remaining mapper.xml files must not share a package path with the mapper interfaces, otherwise duplicate-registration errors may occur.

MyBatis annotations for complex mapping development

Complex relational mappings, previously configured in the mapping file, can be completed with annotation development by combining the @Results, @Result, @One, and @Many annotations.

One-to-one query

One-to-one query model

The relationship between the user table and the order table is that one user has multiple orders, and one order belongs to only one user

One to one query requirement: query an order, and at the same time query the user that the order belongs to.

One-to-one query statement

SQL statement:

select * from orders;

select * from user where id = (the uid of the order queried above);

The query result is as follows:

Create Order and User entities

public class Order {

    private int id;

    private Date ordertime;

    private double total;

    // Represents the customer to which the current order belongs
    private User user;
}

public class User {

    private int id;

    private String username;

    private String password;

    private Date birthday;

}

Create the OrderMapper interface

@Results({
    @Result(property = "id", column = "id"),
    @Result(property = "orderTime", column = "orderTime"),
    @Result(property = "total", column = "total"),
    @Result(property = "user", column = "uid", javaType = User.class,
            one = @One(select = "com.lagou.mapper.IUserMapper.findUserById"))
})
@Select("select * from orders")
public List<Order> findOrderAndUser();
// Query users by id
@Select({"select * from user where id = #{id}"})
public User findUserById(Integer id);

Query analysis diagram is as follows:

The test results

@Test
public void testSelectOrderAndUser() {
    List<Order> all = orderMapper.findAll();
    for (Order order : all) {
        System.out.println(order);
    }
}

One-to-many query

A one-to-many query model

The relationship between the user table and the order table is that one user has multiple orders, and one order belongs to only one user

One to many query requirements: query a user and query the orders that the user has at the same time

A statement for a one-to-many query

SQL statement:

select * from user;

select * from orders where uid = (the id of the user queried above);

The query result is as follows:

Modifying the User entity

public class Order {

    private int id;

    private Date ordertime;

    private double total;

    // Represents the customer to which the current order belongs

    private User user;

}

public class User {

    private int id;

    private String username;

    private String password;

    private Date birthday;

    // Represents which orders the current user has

    private List<Order> orderList;
}

Create the UserMapper interface

// Query all users and the order information associated with each user
@Select("select * from user")
@Results({
    @Result(property = "id", column = "id"),
    @Result(property = "username", column = "username"),
    @Result(property = "orderList", column = "id", javaType = List.class,
            many = @Many(select = "com.lagou.mapper.IOrderMapper.findOrderByUid"))
})
public List<User> findAll();
@Select("select * from orders where uid = #{uid}")
public List<Order> findOrderByUid(Integer uid);
Copy the code

The test results

Many-to-many query

Many-to-many query model

The relationship between a user table and a role table is that a user has multiple roles and a role is used by multiple users

Many-to-many query requirement: query users and all roles of each user at the same time.

Many-to-many query statement

SQL statement:

select * from user;

select * from role r, user_role ur where r.id = ur.role_id and ur.user_id = #{uid}; -- #{uid} is the queried user's id

Create the Role entity and modify the User entity

public class User {

    private int id;

    private String username;

    private String password;
    
    private Date birthday;
    
    // Represents which orders the current user has
    private List<Order> orderList;

    // Indicates the roles of the current user
    private List<Role> roleList;
}

public class Role {
    private int id;
    private String rolename;
}

Add the UserMapper interface method

// Query information about all users and the roles associated with each user
@Select("select * from user")
@Results({
    @Result(property = "id", column = "id"),
    @Result(property = "username", column = "username"),
    @Result(property = "roleList", column = "id", javaType = List.class,
            many = @Many(select = "com.lagou.mapper.IRoleMapper.findRoleByUid"))
})
public List<User> findAllUserAndRole();

// Query roles by user id
@Select("select * from sys_role r, sys_user_role ur where r.id = ur.roleid and ur.userid = #{uid}")
public List<Role> findRoleByUid(Integer uid);

The test results

Mybatis cache

Level 1 cache

① Prove that the level-1 cache exists: query the same user twice in the same sqlSession, and observe that the SQL statement is issued only once.

@Test
public void test1() {
    // Generate a session according to sqlSessionFactory
    SqlSession sqlSession = sessionFactory.openSession();
    UserMapper userMapper = sqlSession.getMapper(UserMapper.class);

    // For the first query, issue a SQL statement and put the result of the query into the cache
    User u1 = userMapper.selectUserByUserId(1);
    System.out.println(u1);

    // The second query, since it is the same sqlSession, will query the result in the cache
    // If yes, it is fetched directly from the cache and does not interact with the database
    User u2 = userMapper.selectUserByUserId(1);
    System.out.println(u2);
    sqlSession.close();
}

View the console print:

② Prove that a commit clears the level-1 cache: perform an update and commit between two identical queries in the same sqlSession.

@Test
public void test2() {
    // Generate a session according to sqlSessionFactory
    SqlSession sqlSession = sessionFactory.openSession();
    UserMapper userMapper = sqlSession.getMapper(UserMapper.class);

    // For the first query, issue an SQL statement and put the query result into the cache
    User u1 = userMapper.selectUserByUserId( 1 );
    System.out.println(u1);

    // Perform an update in between; the following sqlSession.commit() clears the level-1 cache
    u1.setSex("Female");
    userMapper.updateUserByUserId(u1);
    sqlSession.commit();

    // For the second query, the sqlSession.commit() above has cleared the cache,
    // The query will also issue a SQL statement
    User u2 = userMapper.selectUserByUserId(1);
    System.out.println(u2);
    sqlSession.close();
}

View the console print:

③ Level-1 cache workflow analysis:

  • When you query information about the user whose ID is 1 for the first time, check whether there is information about the user whose ID is 1 in the cache. If there is no information about the user, query the user information from the database. Get the user information and store the user information in the level 1 cache.

  • If a commit operation (insert, update, or delete) is performed in an intermediate sqlSession, the level-1 cache in the sqlSession will be cleared. The purpose of this operation is to store the latest information in the cache and avoid dirty reads.

  • When querying user information about user 1 for the second time, check whether there is user information about user 1 in the cache. If there is user information about user 1 in the cache, obtain user information from the cache.

  • A manual call to sqlSession.clearCache() also clears the level-1 cache.

The level-1 cache is essentially a Map. The key is a CacheKey composed of the statementId (namespace.id), the SQL placeholder parameter values, the BoundSql, and the RowBounds (paging object).

The value is the result of the query.
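The key/value behavior described above can be sketched with a plain HashMap. This is only an illustrative sketch, not MyBatis source: the class and method names (LocalCacheSketch, queryFromDatabase) are invented, and the composite CacheKey is reduced to a simple string.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustrative level-1 cache sketch: key = statementId + paging + SQL + parameter values,
// value = query result. All names here are invented for the demo.
class LocalCacheSketch {
    private final Map<String, Object> localCache = new HashMap<>();

    static String cacheKey(String statementId, int offset, int limit, String sql, Object... params) {
        return statementId + "|" + offset + "|" + limit + "|" + sql + "|" + Arrays.toString(params);
    }

    Object query(String statementId, int offset, int limit, String sql, Object... params) {
        // look in the cache first; only "hit the database" on a miss
        return localCache.computeIfAbsent(
                cacheKey(statementId, offset, limit, sql, params),
                k -> queryFromDatabase(sql, params));
    }

    private Object queryFromDatabase(String sql, Object... params) {
        return "row-for" + Arrays.toString(params); // stand-in for a real JDBC query
    }

    // what sqlSession.clearCache() and a commit trigger
    void clearCache() { localCache.clear(); }
}
```

Two identical queries return the very same cached object; after clearCache() the "database" is queried again and a fresh object comes back.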

Level 1 cache principle exploration and source code analysis

What exactly is level 1 caching? When is level 1 cache created and what is the workflow of level 1 cache? In this section, we’ll take a look at the nature of level 1 caching

Since everything we have said about the level-1 cache revolves around SqlSession, a natural starting point is SqlSession itself: go straight to that interface and look for cache creation or cache-related properties and methods.

An inspection shows that, of all the methods on SqlSession, only clearCache() seems related to caching, so we start the source analysis from this method: find out what the cache it clears actually is, and what its parent and child classes are. Once you understand those relationships you will have a much deeper understanding of the class, and after a round of analysis you might end up with a flow chart like the one below.

Digging deeper, when the flow reaches the clear() method in PerpetualCache, it calls its cache.clear() method. What is that cache? It is a property of PerpetualCache: `private Map<Object, Object> cache = new HashMap<>();`. So calling cache.clear() is really calling map.clear(), which means the level-1 cache is nothing more than a local Map.

Where is the most likely place for the cache to be created? I would say the Executor, because the Executor is what executes SQL requests, and the method that clears the cache is also executed there. Looking around the Executor interface, we indeed find a createCacheKey method, which is implemented in BaseExecutor as follows:

public CacheKey createCacheKey(MappedStatement ms, Object parameterObject, RowBounds rowBounds, BoundSql boundSql) {
  if (closed) {
    throw new ExecutorException("Executor was closed.");
  }
  CacheKey cacheKey = new CacheKey();
  // The id of MappedStatement is statmentId
  cacheKey.update(ms.getId());
  // The offset of paging
  cacheKey.update(rowBounds.getOffset());
  // Number of entries per page
  cacheKey.update(rowBounds.getLimit());
  // The actual execution of the SQL statement
  cacheKey.update(boundSql.getSql());
  List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
  TypeHandlerRegistry typeHandlerRegistry = ms.getConfiguration().getTypeHandlerRegistry();
  // mimic DefaultParameterHandler logic
  for (ParameterMapping parameterMapping : parameterMappings) {
    if (parameterMapping.getMode() != ParameterMode.OUT) {
      Object value;
      String propertyName = parameterMapping.getProperty();
      if (boundSql.hasAdditionalParameter(propertyName)) {
        value = boundSql.getAdditionalParameter(propertyName);
      } else if (parameterObject == null) {
        value = null;
      } else if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
        value = parameterObject;
      } else {
        MetaObject metaObject = configuration.newMetaObject(parameterObject);
        value = metaObject.getValue(propertyName);
      }
      cacheKey.update(value);
    }
  }
  if (configuration.getEnvironment() != null) {
    // issue #176
    // the id of the <environment> tag in sqlMapConfig.xml, which specifies the data source
    cacheKey.update(configuration.getEnvironment().getId());
  }
  return cacheKey;
}

Taking a look at the update methods in CacheKey:

The update method is executed on a CacheKey object; it ultimately appends each of the five values above to an updateList inside the CacheKey.

Pay special attention to the last value, configuration.getEnvironment().getId(): this is the id of the environment tag defined in sqlMapConfig.xml, see below.

<environments default="development"> <!-- "development" is the id value -->
    <environment id="development">
    <transactionManager type="JDBC"/>
    <dataSource type="POOLED">
        <property name="driver" value="${jdbc.driver}"/>
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
    </dataSource>
    </environment>
</environments>

Back to business: what should be done with the cache once it has been created? It is certainly not created out of thin air. Our exploration shows that the level-1 cache is used mostly for query operations — after all, it is also called the query cache (why it is called that we will discuss later). Let's see where the cache is actually used, tracing the query method of BaseExecutor:

If the key is not found in the cache, the database is queried in queryFromDatabase and the result is written to localCache; the put method of the localCache object ultimately delegates storage to a Map.

Level 2 cache

The level-2 cache works the same way as the level-1 cache: the first query puts the data into the cache, and the second query fetches it directly from the cache. But while the level-1 cache is scoped to a sqlSession, the level-2 cache is scoped to a mapper's namespace. That means multiple sqlSessions can share the level-2 cache region of one mapper, and if two mappers have the same namespace, the data from both mappers — even though they are two separate mappers — is stored in the same level-2 cache region.

Note: MyBatis level 1 cache is enabled by default, but level 2 cache is not. It needs to be configured manually
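The scope difference can be sketched in a few lines of plain Java. All names here are invented for illustration; this is not MyBatis source — it only shows that every session asking for the same namespace sees the same cache region.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: level-1 caches are per-session maps, while the level-2 cache
// is one map per mapper namespace, shared by all sessions of that namespace.
class NamespaceCacheRegistry {
    private static final Map<String, Map<Object, Object>> PER_NAMESPACE = new HashMap<>();

    // every session asking for the same namespace gets the very same cache region
    static Map<Object, Object> forNamespace(String namespace) {
        return PER_NAMESPACE.computeIfAbsent(namespace, ns -> new HashMap<>());
    }
}
```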

① The annotation-based method is simply demonstrated as follows:

1. Enable level 2 caching in the sqlMapConfig.xml configuration file (cacheEnabled is true by default)

<!-- Enable level 2 cache -->
<settings>
    <setting name="cacheEnabled" value="true"/>
</settings>

2. Add @CacheNamespace to the IUserMapper interface as follows:

@CacheNamespace // Enable level 2 cache
public interface IUserMapper {
 
    // Query users by id
    @Select({"select * from user where id = #{id}"})
    public User findUserById(Integer id);
}

3. The entity class must implement the Serializable interface, as follows:

public class User implements Serializable {
    // Attributes omitted
}

This is because fetching cached data requires deserialization. The level-2 cache may use various storage media — not only memory but possibly disk — so cached data must be serialized on the way in and deserialized on the way out. That is why POJOs used with Mybatis implement the Serializable interface.
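The serialize/deserialize round trip can be shown in isolation with the JDK's object streams. This is a standalone sketch (CachedUser and roundTrip are demo names): without implementing Serializable, the writeObject call below would throw NotSerializableException, which is exactly why the POJO must implement the interface.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A byte-oriented level-2 cache writes the object out and later reads a fresh copy back.
class CachedUser implements Serializable {
    private static final long serialVersionUID = 1L;
    final int id;
    final String username;
    CachedUser(int id, String username) { this.id = id; this.username = username; }
}

class SerializationDemo {
    static CachedUser roundTrip(CachedUser original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(original);   // "store" into the cache as bytes
            out.flush();
            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
            return (CachedUser) in.readObject();  // "fetch": deserialization yields a new object
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that the copy that comes back is a different object with the same data — which also explains why `user1 == user2` prints false in the test below.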

4. Write test classes as follows:

/** Note: the level-2 cache stores an object's data, not the object itself */
@Test
public void SecondLevelCache() {
    SqlSession sqlSession1 = sqlSessionFactory.openSession();
    SqlSession sqlSession2 = sqlSessionFactory.openSession();
     
    IUserMapper mapper1 = sqlSession1.getMapper(IUserMapper.class);
    IUserMapper mapper2 = sqlSession2.getMapper(IUserMapper.class);
    
    User user1 = mapper1.findUserById(1);
    sqlSession1.close(); // Clear level 1 cache

    User user2 = mapper2.findUserById(1);
    System.out.println(user1==user2);
}

The demo found that no SQL was printed in the second query and the log showed a cache hit, indicating that the first query stored the data in the database to the second-level cache, and the second query directly fetched the data from the cache. But the address of the User object printed in the above code is different. Why?

Because the level-2 cache caches the data of an object, rather than the object itself, that is, when retrieving data from the level-2 cache, the previously cached data is encapsulated as a new object and returned.

If an insert, update, or delete is committed between the two queries, the level-2 cache is cleared as well, as shown below:

/** Note: the level-2 cache stores an object's data, not the object itself */
@Test
public void SecondLevelCache() {
    SqlSession sqlSession1 = sqlSessionFactory.openSession();
    SqlSession sqlSession2 = sqlSessionFactory.openSession();
    SqlSession sqlSession3 = sqlSessionFactory.openSession();

    IUserMapper mapper1 = sqlSession1.getMapper(IUserMapper.class);
    IUserMapper mapper2 = sqlSession2.getMapper(IUserMapper.class);
    IUserMapper mapper3 = sqlSession3.getMapper(IUserMapper.class);

    User user1 = mapper1.findUserById(1);
    sqlSession1.close(); // Clear level 1 cache


    User user = new User();
    user.setId(1);
    user.setUsername("lisi");
    mapper3.updateUser(user);
    sqlSession3.commit();

    User user2 = mapper2.findUserById(1);
    System.out.println(user1==user2);
}

Both queries in the code above will query the database because the update operation is added in between, clearing the secondary cache.

UseCache and flushCache

You can also disable the level-2 cache for an individual query method (that is, the query bypasses the level-2 cache) with @Options(useCache = false). The code is as follows:

/* MyBatis' own default level-2 cache implementation class is PerpetualCache */
@CacheNamespace // Enable level 2 cache
public interface IUserMapper {

    // This query disables level 2 caching
    @Options(useCache = false)
    @Select({"select * from user where id = #{id}"})
    public User findUserById(Integer id);
}

Within the same mapper namespace, an insert, update, or delete needs to flush the cache; otherwise dirty reads may occur. This is controlled by the flushCache attribute on the statement configuration, which defaults to true (flush the cache); setting it to false disables flushing. Note that if you manually modify the data in a database table while using the cache, dirty reads will still occur. In general the cache is flushed after a commit, and flushCache=true avoids dirty reads of the database, so there is usually no need to change the default.
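The flushCache idea can be reduced to a few lines. This is a sketch of the behavior only, not MyBatis internals; all names (FlushingMapperSketch, the `row` field standing in for a database row) are invented for the demo.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of flushCache=true: queries fill the cache, and any write flushes it,
// so a later query cannot read a stale (dirty) row from the cache.
class FlushingMapperSketch {
    private final Map<String, String> cache = new HashMap<>();
    private String row = "tom"; // stand-in for the database row

    String select(String key) {
        return cache.computeIfAbsent(key, k -> row); // cache on first read
    }

    void update(String key, String newValue) {
        row = newValue;
        cache.clear(); // flushCache=true: clear so the next select re-reads the "database"
    }
}
```

Without the clear() in update, the second select below would return the stale value "tom".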

② The XML-based method is demonstrated briefly as follows:

1. Enable level 2 caching in the sqlMapConfig.xml configuration file (also true by default), the same as with annotations.

<!-- Enable level 2 cache -->
<settings>
    <setting name="cacheEnabled" value="true"/>
</settings>

2. Enable caching in the UserMapper.xml file

<!-- Enable level 2 cache -->
<cache></cache>

The cache tag in the mapper XML file is empty here, but it can be configured with a type attribute. The default caching class is PerpetualCache; if we omit type we get Mybatis' default cache, and we can also implement the Cache interface to provide a custom cache.

public class PerpetualCache implements Cache {
    private final String id;
    private Map<Object, Object> cache = new HashMap<>();

    public PerpetualCache(String id) {
        this.id = id;
    }
}

We can see that the underlying level 2 cache is still a HashMap structure.
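A custom cache in this style is easy to sketch. The interface below is a local stand-in for the core of MyBatis' Cache interface (the real org.apache.ibatis.cache.Cache also declares removeObject, getSize, and more), and MapBackedCache mirrors PerpetualCache's map-backed approach; both names are invented for this demo.

```java
import java.util.HashMap;
import java.util.Map;

// Local stand-in for the core methods of the MyBatis Cache interface.
interface SimpleCache {
    String getId();
    void putObject(Object key, Object value);
    Object getObject(Object key);
    void clear();
}

// PerpetualCache-style implementation: the id is the mapper namespace,
// and storage is just a HashMap.
class MapBackedCache implements SimpleCache {
    private final String id;
    private final Map<Object, Object> cache = new HashMap<>();

    MapBackedCache(String id) { this.id = id; }

    @Override public String getId() { return id; }
    @Override public void putObject(Object key, Object value) { cache.put(key, value); }
    @Override public Object getObject(Object key) { return cache.get(key); }
    @Override public void clear() { cache.clear(); }
}
```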

Level 2 cache integration redis

Above we introduced Mybatis' own level-2 cache, but that cache works within a single server and cannot function as a distributed cache. So what is a distributed cache? If a user's request is served by server 1, the cached data is stored on server 1; if a later request is served by server 2, the cached data is not available there, as shown in the following figure:

To solve this problem, we need to find a distributed cache, which is specially used to store cached data, so that different servers can store the cached data there, and also fetch the cached data from it, as shown in the following figure:

As shown in the figure above, we use a third-party caching framework between several different servers. We put all the caches in this third-party framework, and then we can fetch data from the caches no matter how many servers there are.

Here we introduce the integration of Mybatis and Redis. As mentioned earlier, Mybatis provides a Cache interface; if you want to implement your own caching logic, you can implement the Cache interface.

Mybatis implements one itself by default, PerpetualCache, but that implementation does not work as a distributed cache, so a different implementation is needed. A Redis-based implementation of the Cache interface exists in the mybatis-redis package.

Integration step 1: introduce the MyBatis and Redis integration dependency as follows

<dependency>
    <groupId>org.mybatis.caches</groupId>
    <artifactId>mybatis-redis</artifactId>
    <version>1.0.0-beta2</version>
</dependency>

Integration step 2: specify the MyBatis level-2 cache implementation class in mapper.xml


      

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">

<mapper namespace="com.lagou.mapper.IUserMapper">
    <cache type="org.mybatis.caches.redis.RedisCache" />

    <select id="findAll" resultType="com.lagou.pojo.User" useCache="true">
        select * from user
    </select>
</mapper>

Or specify it in the form of annotations, as shown in the IUserMapper interface

/* MyBatis' own default level-2 cache implementation class is PerpetualCache; here we specify RedisCache instead */
@CacheNamespace(implementation = RedisCache.class) // Enable level 2 cache backed by Redis
public interface IUserMapper {

    // Query users by id
    @Options(useCache = true)
    @Select({"select * from user where id = #{id}"})
    public User findUserById(Integer id);
}

Integration step 3: create a redis.properties file under the resources directory; RedisCache reads it for its connection configuration.

redis.host=localhost
redis.port=6379
redis.connectionTimeout=5000
redis.password=
redis.database=0

Integration Step 4: Testing

/** Note: the level-2 cache stores an object's data, not the object itself */
@Test
public void SecondLevelCache() {
    SqlSession sqlSession1 = sqlSessionFactory.openSession();
    SqlSession sqlSession2 = sqlSessionFactory.openSession();
    SqlSession sqlSession3 = sqlSessionFactory.openSession();

    IUserMapper mapper1 = sqlSession1.getMapper(IUserMapper.class);
    IUserMapper mapper2 = sqlSession2.getMapper(IUserMapper.class);
    IUserMapper mapper3 = sqlSession3.getMapper(IUserMapper.class);

    User user1 = mapper1.findUserById(1);
    sqlSession1.close(); // Clear level 1 cache

    User user = new User();
    user.setId(1);
    user.setUsername("lisi");
    mapper3.updateUser(user);
    sqlSession3.commit();

    User user2 = mapper2.findUserById(1);
    System.out.println(user1==user2);
}

RedisCache source code analysis

RedisCache follows roughly the same approach as Mybatis' own PerpetualCache: it implements the Cache interface, but it operates on the cache through Jedis. There are, however, some differences in the design details.

public RedisCache(String id) {
    if (id == null) {
        throw new IllegalArgumentException("Cache instances require an ID");
    } else {
        this.id = id;
        RedisConfig redisConfig = RedisConfigurationBuilder.getInstance().parseConfiguration();
        pool = new JedisPool(redisConfig, redisConfig.getHost(), redisConfig.getPort(),
                redisConfig.getConnectionTimeout(), redisConfig.getSoTimeout(),
                redisConfig.getPassword(), redisConfig.getDatabase(), redisConfig.getClientName());
    }
}

RedisCache is created by Mybatis' CacheBuilder at startup, simply by calling the RedisCache(String id) constructor. In that constructor, RedisConfigurationBuilder is called to create a RedisConfig object, which is then used to create a JedisPool. The RedisConfig class extends JedisPoolConfig and wraps properties such as host and port.

The RedisConfig object is created by RedisConfigurationBuilder; let's briefly look at the main method of that class:

public RedisConfig parseConfiguration(ClassLoader classLoader) {
    Properties config = new Properties();

    // Load the redis.properties file from the classpath
    InputStream input = classLoader.getResourceAsStream(redisPropertiesFilename);
    if (input != null) {
        try {
            config.load(input);
        } catch (IOException e) {
            throw new RuntimeException(
                    "An error occurred while reading classpath property '"
                            + redisPropertiesFilename
                            + "', see nested exceptions", e);
        } finally {
            try {
                input.close();
            } catch (IOException e) {
                // close quietly
            }
        }
    }

    // Create the RedisConfig
    RedisConfig jedisConfig = new RedisConfig();
    setConfigProperties(config, jedisConfig);
    return jedisConfig;
}

The core of this method is to load the properties file named by redisPropertiesFilename (redis.properties) from the classpath and copy its values into a RedisConfig.

Next, RedisCache creates the JedisPool from the RedisConfig. RedisCache also implements a simple template method for operating on Redis:

private Object execute(RedisCallback callback) {
  Jedis jedis = pool.getResource();
  try {
    return callback.doWithRedis(jedis);
  } finally {
    jedis.close();
  }
}

Now look at the two most important methods of Cache, putObject and getObject; they show that mybatis-redis stores data using Redis' hash structure:

@Override
public void putObject(final Object key, final Object value) {
  execute(new RedisCallback() {
    @Override
    public Object doWithRedis(Jedis jedis) {
      jedis.hset(id.toString().getBytes(), key.toString().getBytes(),
              SerializeUtil.serialize(value));
      return null;
    }
  });
}

@Override
public Object getObject(final Object key) {
  return execute(new RedisCallback() {
    @Override
    public Object doWithRedis(Jedis jedis) {
      return SerializeUtil.unserialize(
              jedis.hget(id.toString().getBytes(), key.toString().getBytes()));
    }
  });
}

Mybatis-redis uses the cache id as the Redis hash key when storing data. SerializeUtil, like other serialization utilities, is responsible for serializing and deserializing objects.
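The resulting storage layout can be simulated with nested maps instead of a real Redis server: one hash per cache, where the hash key is the cache id (the mapper namespace), the field is the CacheKey, and the value is the serialized object. FakeRedis is an invented stand-in for jedis.hset/hget, not a real client.

```java
import java.util.HashMap;
import java.util.Map;

// Simulates the mybatis-redis layout: hash key = cache id, field = CacheKey,
// value = serialized bytes (shown here as strings). All names are invented for the demo.
class FakeRedis {
    private final Map<String, Map<String, String>> hashes = new HashMap<>();

    void hset(String key, String field, String value) {
        hashes.computeIfAbsent(key, k -> new HashMap<>()).put(field, value);
    }

    String hget(String key, String field) {
        Map<String, String> hash = hashes.get(key);
        return hash == null ? null : hash.get(field);
    }
}
```

Because each namespace gets its own hash, entries from different mappers never collide even when their CacheKeys match.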

Mybatis plug-in

Plug-in profile

In general, open source frameworks provide plug-in or other extension points so that developers can extend them. The benefits are obvious: first, it increases the flexibility of the framework; second, developers can extend the framework based on actual needs to make it work better. Taking MyBatis as an example, we can implement features such as paging, table sharding, and monitoring on top of the MyBatis plug-in mechanism. Because plug-ins are unrelated to the business, the business code is unaware of their existence, so plug-ins can be added transparently to enhance functionality.

Mybatis plug-in introduction

Mybatis is an excellent ORM open source framework with great flexibility. It provides an easy-to-use plug-in extension mechanism at its four major components: Executor, StatementHandler, ParameterHandler, and ResultSetHandler. Mybatis operates on the persistence layer through these four core objects, and it supports intercepting them with plug-ins. For MyBatis, plug-ins are interceptors used to enhance the core objects, and the enhancement is essentially implemented with dynamic proxies.

MyBatis allows the following interception methods:

  • Executor (update, query, commit, rollback and other methods);

  • SQL syntax builder StatementHandler (prepare, parameterize, batch, update, query and other methods);

  • Parameter handler ParameterHandler (getParameterObject, setParameters methods);

  • Result set handler ResultSetHandler (handleResultSets, handleOutputParameters and other methods);

Mybatis plugin principle

When the four core objects are created:

  • 1. Each created object is not returned directly, but passed through interceptorChain.pluginAll(parameterHandler);

  • 2. All interceptors (the interface a plug-in must implement) are fetched, interceptor.plugin(target) is called, and the wrapped target object is returned;

  • 3. Thanks to the plug-in mechanism, we can create a proxy object for the target object, in the AOP (aspect-oriented) style. Our plug-ins can create proxy objects for the four core objects, and those proxies can intercept every execution of the four objects.

How exactly does a plug-in intercept and attach extra functionality? Take ParameterHandler as an example:

public ParameterHandler newParameterHandler(MappedStatement mappedStatement,Object object, BoundSql sql, InterceptorChain interceptorChain){

    ParameterHandler parameterHandler = mappedStatement.getLang().createParameterHandler(mappedStatement,object,sql);

    parameterHandler = (ParameterHandler)interceptorChain.pluginAll(parameterHandler);

    return parameterHandler;

}

public Object pluginAll(Object target) {
    for (Interceptor interceptor : interceptors) {
        target = interceptor.plugin(target);
    }
    return target;
}

The interceptorChain holds all interceptors, which are created when mybatis is initialized. Interceptors in the interceptor chain are called to intercept or enhance the target in turn.
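The wrap-and-intercept mechanism can be re-created in miniature with a JDK dynamic proxy. This is an illustrative sketch, not MyBatis source: SqlExecutor, PlainSqlExecutor, and LoggingInterceptor are invented names, and the plugin method here plays the role of Interceptor.plugin(target).

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Minimal re-creation of pluginAll(): the interceptor wraps the target in a JDK dynamic
// proxy, so the plug-in logic runs before the real method is invoked.
interface SqlExecutor { String query(String sql); }

class PlainSqlExecutor implements SqlExecutor {
    public String query(String sql) { return "result-of:" + sql; }
}

class LoggingInterceptor {
    final List<String> log = new ArrayList<>();

    // corresponds to Interceptor.plugin(target): return a proxy wrapping the target
    Object plugin(Object target) {
        return Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                target.getClass().getInterfaces(),
                (proxy, method, args) -> {
                    log.add("intercepted " + method.getName()); // plug-in logic first
                    return method.invoke(target, args);          // then the original call
                });
    }
}
```

Chaining several interceptors is just repeated wrapping: each call to plugin replaces the target with a new proxy, exactly what the pluginAll loop above does.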

The target in interceptor.plugin(target) can be understood as one of the four core objects of Mybatis. If we want to intercept the query method of Executor, we can define the plug-in as follows:

@Intercepts({
    @Signature(
        type = Executor.class,
        method = "query",
        args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class}
    )
})
public class ExamplePlugin implements Interceptor {
    // omit logic
}

In addition, we need to register the plug-in in sqlMapConfig.xml:

<plugins>
    <plugin interceptor="com.lagou.plugin.ExamplePlugin">
    </plugin>
</plugins>

This allows MyBatis to load the plug-in on startup and save the plug-in instance into the relevant object (InterceptorChain). After the preparation, MyBatis is in a ready state. When executing SQL, we need to create SqlSession with DefaultSqlSessionFactory first. An Executor instance will be created during SqlSession creation. After the Executor instance is created, MyBatis will use JDK dynamic proxy to generate proxy classes for the instance. This way, the plug-in logic can be executed before executor-related methods are called.

This is the basic principle of the MyBatis plugin mechanism

Custom plug-in

Mybatis plug-in interface -Interceptor

  • Intercept method, the core method of the plug-in

  • Plugin method to generate a proxy object for Target

  • The setProperties method, which passes the parameters required by the plug-in

@Intercepts({
    @Signature(
        type = StatementHandler.class,            // which class to intercept
        method = "prepare",                       // which method to intercept
        args = {Connection.class, Integer.class}  // the method's parameter types (needed because of overloading)
    )
})
public class MyPlugin implements Interceptor {

    /* Intercept method: executed every time an intercepted method of the target object is invoked */
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        System.out.println("Methods have been enhanced....");
        return invocation.proceed(); // The original method is executed
    }

    /* Mainly to store the current interceptor generation proxy in the interceptor chain */
    @Override
    public Object plugin(Object target) {
        Object wrap = Plugin.wrap(target, this);
        return wrap;
    }

    /* Get the parameters of the configuration file */
    @Override
    public void setProperties(Properties properties) {
        System.out.println("The obtained configuration file parameters are: " + properties);
    }
}

The sqlmapconfig.xml configuration plug-in is as follows:

<plugins>
    <plugin interceptor="com.lagou.plugin.MyPlugin">
        <!-- Configure parameters -->
        <property name="name" value="Bob"/>
    </plugin>
</plugins>

Plug-in source code analysis

Plugin implements the InvocationHandler interface, so its Invoke method intercepts all method calls. The Invoke method checks the intercepted method to determine whether the plug-in logic is executed. The logic of this method is as follows:

// -Plugin
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    try {
        /* Get the methods to intercept for the declaring class, e.g.
           signatureMap.get(Executor.class) may return [query, update, commit] */
        Set<Method> methods = signatureMap.get(method.getDeclaringClass());
        // Check whether the method set contains the intercepted method
        if (methods != null && methods.contains(method)) {
            return interceptor.intercept(new Invocation(target, method, args));
        }
        return method.invoke(target, args);
    } catch (Exception e) {
        throw ExceptionUtil.unwrapThrowable(e);
    }
}

The invoke method has relatively little code and the logic is not hard to understand. First, it checks whether the intercepted method is configured in the plug-in's @Signature annotation; if so, the plug-in logic is executed, otherwise the intercepted method itself is executed. The plug-in logic is encapsulated in intercept, whose parameter is an Invocation that stores the target class, the method, and the method's argument list. Let's take a quick look at the definition of this class:

public class Invocation {

  private final Object target;
  private final Method method;
  private final Object[] args;

  public Invocation(Object target, Method method, Object[] args) {
    this.target = target;
    this.method = method;
    this.args = args;
  }

  public Object getTarget() {
    return target;
  }

  public Method getMethod() {
    return method;
  }

  public Object[] getArgs() {
    return args;
  }

  public Object proceed() throws InvocationTargetException, IllegalAccessException {
    return method.invoke(target, args);
  }
}

That concludes the analysis of the plug-in's execution logic.

PageHelper paging plug-in

MyBatis' functionality can be extended with third-party plug-ins. PageHelper is a paging plug-in that encapsulates the complexity of paging operations and makes it simple to obtain paging-related data.

Development steps:

  • ① Import the PageHelper coordinates

  • ② Configure the PageHelper plug-in in the Mybatis core configuration file

  • ③ Test paging data retrieval

① Import common PageHelper coordinates

<! -- Paging Assistant -->
<dependency>
    <groupId>com.github.pagehelper</groupId>
    <artifactId>pagehelper</artifactId>
    <version>3.7.5</version>
</dependency>
<dependency>
    <groupId>com.github.jsqlparser</groupId>
    <artifactId>jsqlparser</artifactId>
    <version>0.9.1</version>
</dependency>

② Configure the PageHelper plug-in in the Mybatis core configuration file

<!-- Note: the paging helper plug-in must be configured before the mapper -->
<plugin interceptor="com.github.pagehelper.PageHelper">
    <!-- Specify the dialect -->
    <property name="dialect" value="mysql"/>
</plugin>

③ Test the implementation of paging code

@Test
public void pageHelperTest() {

    PageHelper.startPage(1, 1);
    List<User> users = userMapper.selectUser();
    for (User user : users) {
        System.out.println(user);
    }

    PageInfo<User> pageInfo = new PageInfo<>(users);
    System.out.println("Total number of items:"+pageInfo.getTotal());
    System.out.println("Total pages:"+pageInfo.getPages());
    System.out.println("Current page:"+pageInfo.getPageNum());
    System.out.println("Number of items per page:"+pageInfo.getPageSize());
}

General mapper

What is a universal Mapper

The general Mapper exists to handle single-table CRUD, and it is built on the Mybatis plug-in mechanism. Developers do not need to write SQL or add methods to the DAO; as long as the entity class is written, the corresponding create, read, update, and delete methods are available.

How to use the general Mapper?

  1. First, in the Maven project, introduce the mapper dependency in pom.xml
<dependency>
    <groupId>tk.mybatis</groupId>
    <artifactId>mapper</artifactId>
    <version>3.1.2</version>
</dependency>
  2. Complete the configuration in the Mybatis configuration file
<plugins>
<!-- <plugin interceptor="com.lagou.plugin.MyPlugin"> <property name="name" value="tom"/> </plugin> -->
        <!-- Paging plug-in: if present, it should be placed before the general mapper -->
       <plugin interceptor="com.github.pagehelper.PageHelper">
           <property name="dialect" value="mysql"/>
       </plugin>
       
       <plugin interceptor="tk.mybatis.mapper.mapperhelper.MapperInterceptor">
           <! -- Specify which generic mapper interface is currently used -->
           <property name="mappers" value="tk.mybatis.mapper.common.Mapper"/>
       </plugin>
   </plugins>
  3. Set the primary key on the entity class
@Table(name = "t_user")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)   
    private Integer id;
    
    private String username;
}
  4. Define a generic Mapper

public interface UserMapper extends Mapper<User> {
}
  5. Test
@Test
public void mapperTest(a) throws IOException {
    InputStream resourceAsStream = Resources.getResourceAsStream("sqlMapConfig.xml");
    SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(resourceAsStream);
    SqlSession sqlSession = sqlSessionFactory.openSession();
    UserMapper mapper = sqlSession.getMapper(UserMapper.class);
    User user = new User();
    user.setId(1);
    User user1 = mapper.selectOne(user);
    System.out.println(user1);

    // 2. Query using Example
    Example example = new Example(User.class);
    example.createCriteria().andEqualTo("id", 1);
    List<User> users = mapper.selectByExample(example);
    for (User user2 : users) {
        System.out.println(user2);
    }
}