1. Prerequisites

  • Install a Hadoop cluster: juejin.cn/post/691987…
  • Install ZooKeeper + HBase: juejin.cn/post/692065…

2. Install Phoenix

  • Download the installation package: mirror.bit.edu.cn/apache/phoe…
  • Extract:

tar zxvf apache-phoenix-4.14.3-HBase-1.4-bin.tar.gz

  • Rename:

mv apache-phoenix-4.14.3-HBase-1.4-bin phoenix

  • Copy the Phoenix server and core JARs into the HBase lib directory on every node:

cp phoenix-4.14.3-HBase-1.4-server.jar phoenix-core-4.14.3-HBase-1.4.jar /usr/local/bigdata/hbase/lib/
scp phoenix-4.14.3-HBase-1.4-server.jar phoenix-core-4.14.3-HBase-1.4.jar root@k8s-node1:/usr/local/bigdata/hbase/lib/
scp phoenix-4.14.3-HBase-1.4-server.jar phoenix-core-4.14.3-HBase-1.4.jar root@k8s-2:/usr/local/bigdata/hbase/lib/

  • Replace the configuration files: copy the HBase configuration file hbase-site.xml and the Hadoop configuration file core-site.xml into phoenix/bin/, replacing Phoenix's original configuration files.
  • Restart the HBase cluster so that the Phoenix JAR packages take effect.

3. Check whether the installation is successful

In phoenix/bin, run:

./sqlline.py node1:2181

If the sqlline console starts and connects, Phoenix is installed correctly.
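
If you would rather verify the installation from Java instead of sqlline, a plain JDBC connection is enough. Below is a minimal sketch, assuming the same ZooKeeper address node1:2181 as above; the class name PhoenixSmokeTest is just an illustration, and SYSTEM.CATALOG is a table Phoenix creates for itself, so it exists on any working installation.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // Explicitly load the Phoenix driver class (same name as the driver-class-name used later in application-dev.yml)
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // Phoenix JDBC URLs take the form jdbc:phoenix:<zookeeper-quorum>:<port>
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:node1:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SYSTEM.CATALOG")) {
            rs.next();
            System.out.println("Phoenix is reachable, SYSTEM.CATALOG rows: " + rs.getLong(1));
        }
    }
}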

4. Directory structure

5. Pom configuration file

<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>4.14.3-HBase-1.4</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.18</version>
</dependency>
<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-boot-starter</artifactId>
    <version>3.2.0</version>
    <exclusions>
        <exclusion>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-generator</artifactId>
        </exclusion>
    </exclusions>
</dependency>

6. application-dev.yml

server:
  port: 8080
spring:
  datasource:
    mysql:
      jdbc-url: jdbc:mysql://localhost:3306/singodox?allowMultiQueries=true&useUnicode=true&characterEncoding=UTF-8&useSSL=false
      username: root
      password: root
      driver-class-name: com.mysql.cj.jdbc.Driver
    phoenix:
      jdbc-url: jdbc:phoenix:k8s-master:2181
      driver-class-name: org.apache.phoenix.jdbc.PhoenixDriver
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      # Minimum number of idle connections maintained in the pool
      minimum-idle: 10
      maximum-pool-size: 20
      # Controls the default auto-commit behavior of connections returned from the pool. Default: true
      auto-commit: true
      # Maximum idle time allowed, in milliseconds
      idle-timeout: 30000
      # User-defined name of the connection pool, shown mainly in logs and the JMX management console. Default: auto-generated
      pool-name: custom-hikari
      # Maximum lifetime of a connection in the pool; 0 means infinite. Default: 1800000 (30 minutes)
      max-lifetime: 1800000
      # Connection timeout, in milliseconds. Default: 30 seconds (30000)
      connection-timeout: 30000
      # Connection test query
      connection-test-query: select 1

7. Configure multiple data sources

  • mysql
@Configuration
@MapperScan(basePackages = "com.pandanodes.dao.mysql.**",
        sqlSessionTemplateRef = "mysqlSqlSessionTemplate")
public class MysqlDataSourceConfig {

    public static String MYSQL_LOCATION_PATTERN ="classpath:/mapper/mysql/*.xml";

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource.mysql")
    public DataSource mysqlDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @Primary
    public SqlSessionFactory mysqlSqlSessionFactory(@Qualifier("mysqlDataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(MYSQL_LOCATION_PATTERN));
        bean.setDataSource(dataSource);
        return bean.getObject();
    }

    @Bean
    @Primary
    public DataSourceTransactionManager mysqlTransactionManager(@Qualifier("mysqlDataSource") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean
    @Primary
    public SqlSessionTemplate mysqlSqlSessionTemplate(@Qualifier("mysqlSqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}


  • phoenix
@Configuration
@MapperScan(basePackages = "com.pandanodes.dao.phoenix.**",
        sqlSessionTemplateRef = "phoenixSqlSessionTemplate")
public class PhoenixDataSourceConfig {

    public static String PHOENIX_LOCATION_PATTERN = "classpath:/mapper/phoenix/*.xml";

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.phoenix")
    public DataSource phoenixDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public SqlSessionFactory phoenixSqlSessionFactory(@Qualifier("phoenixDataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(PHOENIX_LOCATION_PATTERN));
        bean.setDataSource(dataSource);
        return bean.getObject();
    }

    @Bean
    public DataSourceTransactionManager phoenixTransactionManager(@Qualifier("phoenixDataSource") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean
    public SqlSessionTemplate phoenixSqlSessionTemplate(@Qualifier("phoenixSqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}
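
With the two configurations above, mapper routing is purely package-based: interfaces under com.pandanodes.dao.mysql.** run through mysqlSqlSessionTemplate against MySQL, while interfaces under com.pandanodes.dao.phoenix.** run through phoenixSqlSessionTemplate against Phoenix/HBase. A minimal sketch of what such mappers could look like (UserMapper, EventMapper, their tables and SQL are hypothetical, not part of the demo project):

// Hypothetical example only -- mapper names, tables and SQL are illustrative.
// com/pandanodes/dao/mysql/UserMapper.java: scanned by MysqlDataSourceConfig, runs on MySQL
package com.pandanodes.dao.mysql;

import java.util.List;
import java.util.Map;
import org.apache.ibatis.annotations.Select;

public interface UserMapper {
    @Select("SELECT id, name FROM user")
    List<Map<String, Object>> listUsers();
}

// com/pandanodes/dao/phoenix/EventMapper.java: scanned by PhoenixDataSourceConfig, runs on Phoenix
package com.pandanodes.dao.phoenix;

import java.util.List;
import java.util.Map;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

public interface EventMapper {
    // Phoenix writes use UPSERT rather than INSERT
    @Insert("UPSERT INTO event (id, payload) VALUES (#{id}, #{payload})")
    int saveEvent(@Param("id") Long id, @Param("payload") String payload);

    @Select("SELECT id, payload FROM event")
    List<Map<String, Object>> listEvents();
}

Both interfaces are ordinary Spring beans, so a service can inject them side by side and, for example, read from MySQL while writing to Phoenix in the same request. Note that the two DataSourceTransactionManagers are independent, so there is no distributed transaction spanning both databases.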

8. Demo address: github.com/panda-nodes…

Panda Notebook Email: [email protected]