Gazing over mountains and rivers, vainly longing for the distant; wind and rain on fallen blossoms deepen the sorrow of spring.

An Overview of ClickHouse

What is ClickHouse?

ClickHouse is a column-oriented database management system (DBMS) open-sourced by Russia's Yandex in 2016. It is designed for online analytical processing (OLAP) and can generate analytical reports in real time using SQL queries.

What is column storage?

Take the following table for example

id   website                  wechat
1    https://niocoder.com/    Java dry goods
2    http://www.merryyou.cn/  javaganhuo

When row storage is used, data on disks is organized as follows:

Row1: 1  https://niocoder.com/    Java dry goods
Row2: 2  http://www.merryyou.cn/  javaganhuo

The advantage is that when you want to read all the attributes of a single record, one disk seek plus a sequential read is enough. But when you want to read one attribute of every record, you either keep seeking or do a full table scan, traversing a lot of data you do not need.

In the case of column storage, data on disks is organized as follows:

col1: 1                       2
col2: https://niocoder.com/   http://www.merryyou.cn/
col3: Java dry goods          javaganhuo

Now, to get the wechat value of every record, you only need to read the col3 column.
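
As a rough illustration (hypothetical: it assumes a table named t with the columns above and a working ClickHouse installation, which is covered later in this article), the difference shows up in how much data a query has to touch:

clickhouse-client --query "SELECT wechat FROM t"            # column store: only the wechat column is read
clickhouse-client --query "SELECT * FROM t WHERE id = 1"    # every column of the matching row must be read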

Cluster Environment Construction

Before installing ClickHouse, set up the environment first. The software packages can be downloaded from the link at the end of this article.

Creating a VM

Install VMware

Install the VMware virtual machine software

Import CentOS

Import the downloaded CentOS system to VMWare

Note: On Windows, confirm that all VMware services have been started.

Also confirm that an IP address has been configured for the VMware VMnet8 NIC.

For more information about VMware network modes, see "VMware VM network modes".

Cluster planning

IP              Host name   Environment configuration                                     ClickHouse installation
192.168.10.100  node01      firewall off, hosts mapping, clock sync, JDK, Zookeeper       clickhouse-server 9000, clickhouse-server 9001
192.168.10.110  node02      firewall off, hosts mapping, clock sync, JDK, Zookeeper       clickhouse-server 9000, clickhouse-server 9001
192.168.10.120  node03      firewall off, hosts mapping, clock sync, JDK, Zookeeper       clickhouse-server 9000, clickhouse-server 9001

Configure each host

Change the IP address and HWADDR address

vim /etc/sysconfig/network-scripts/ifcfg-eth0

IPADDR is the IP address; HWADDR is the MAC address of the NIC.

Change the host name (the change takes effect permanently after a restart)

vim /etc/hostname

Configure the hosts domain name mapping

vim /etc/hosts
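
Based on the cluster plan above, the mapping on all three machines would look roughly like this (a sketch; you can append it with a heredoc instead of editing the file by hand):

cat >> /etc/hosts <<'EOF'
192.168.10.100 node01
192.168.10.110 node02
192.168.10.120 node03
EOF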

Disabling the Firewall

Run the following commands on all three machines:

systemctl status firewalld.service    # check the firewall status
systemctl stop firewalld.service      # stop the firewall
systemctl disable firewalld.service   # disable the firewall permanently

Passwordless SSH login

To make file transfers easier, configure passwordless login among the three machines.

  • The principle of passwordless SSH login
    1. Node A's public key is configured on node B
    2. Node A sends a login request to node B
    3. Node B encrypts a random piece of text with node A's public key
    4. Node A decrypts it with its private key and sends it back to node B
    5. Node B verifies that the text is correct
Three machines generate public and private keys

Run the following command on the three machines to generate the public and private keys

ssh-keygen -t rsa

After executing the command, press Enter three times

Copy the public key to the node01 machine

All three machines copy their public key to node01.

Run the following command on all three machines:

ssh-copy-id node01

Copy node01's authorized_keys to the other machines

Copy the aggregated authorized_keys file from the first machine to the other machines

Run the following command on the node01 machine

scp /root/.ssh/authorized_keys node02:/root/.ssh

scp /root/.ssh/authorized_keys node03:/root/.ssh
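
A quick optional check from node01 that passwordless login works (the hostnames assume the mapping configured earlier); neither command should prompt for a password:

ssh node02 hostname
ssh node03 hostname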

Configure the clock synchronization service

  • Why time synchronization is needed
    • Many distributed systems are stateful: if node A stores data with timestamp 1 and node B stores data with timestamp 2, inconsistent clocks cause problems.
## installation
yum install -y ntp

## Start a scheduled task
crontab -e

Then enter the following line in the editor

*/1 * * * * /usr/sbin/ntpdate ntp4.aliyun.com;
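
Optionally, trigger a one-off synchronization by hand and confirm the clocks agree (same NTP server as in the cron entry):

/usr/sbin/ntpdate ntp4.aliyun.com    # sync immediately
date                                 # compare the output across the three nodes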

Install JDK

Upload the JDK, unpack it, and configure the environment variables

Installation paths of all software

mkdir -p /export/servers    # on node01, node02, node03

Directory for storing all software packages

mkdir -p /export/softwares

Install the rz and sz commands

yum -y install lrzsz

Upload the JDK installation package to /export/softwares and decompress it

tar -zxvf jdk-8u141-linux-x64.tar.gz -C ../servers/

Configuring environment Variables

vim /etc/profile

export JAVA_HOME=/export/servers/jdk1.8.0_141
export PATH=:$JAVA_HOME/bin:$PATH

Run the following command to distribute the JDK installation package to node02 and node03

scp -r /export/servers/jdk1.8.0_141/ node02:/export/servers/
scp -r /export/servers/jdk1.8.0_141/ node03:/export/servers/
scp /etc/profile node02:/etc/profile
scp /etc/profile node03:/etc/profile

Refreshing environment variables

source /etc/profile
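
A quick sanity check on each node (the expected version string assumes the jdk-8u141 package used above):

java -version
# java version "1.8.0_141"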

Install Zookeeper

Server IP       Host name   myid value
192.168.10.100  node01      1
192.168.10.110  node02      2
192.168.10.120  node03      3

Upload the ZooKeeper installation package to /export/softwares and decompress it

tar -zxvf zookeeper-3.4.9.tar.gz -C ../servers/

On node01, modify the configuration file as follows

cd /export/servers/zookeeper-3.4.9/conf/
cp zoo_sample.cfg zoo.cfg
# create the data storage directory
mkdir -p /export/servers/zookeeper-3.4.9/zkdatas/

vim zoo.cfg

dataDir=/export/servers/zookeeper-3.4.9/zkdatas
# how many snapshots to retain
autopurge.snapRetainCount=3
# purge interval, in hours
autopurge.purgeInterval=1
# addresses of the servers in the cluster
server.1=node01:2888:3888
server.2=node02:2888:3888
server.3=node03:2888:3888

On the node01 machine, create a file named myid under /export/servers/zookeeper-3.4.9/zkdatas/ with content 1:

echo 1 > /export/servers/zookeeper-3.4.9/zkdatas/myid

Distribute the installation package to the other machines

Run the following two commands on the node01 machine

scp -r /export/servers/zookeeper-3.4.9/ node02:/export/servers/

scp -r /export/servers/zookeeper-3.4.9/ node03:/export/servers/

Change the value of myid to 2 on node02

echo 2 > /export/servers/zookeeper-3.4.9/zkdatas/myid

Change the value of myid to 3 on the node03 machine

echo 3 > /export/servers/zookeeper-3.4.9/zkdatas/myid

Start the ZooKeeper service

Run the following command on node01, node02, and node03

/export/servers/zookeeper-3.4.9/bin/zkServer.sh start

Viewing the Startup Status

/export/servers/zookeeper-3.4.9/bin/zkServer.sh status
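
If the cluster is healthy, one node should report "Mode: leader" and the other two "Mode: follower". As an extra check, the JDK's jps tool should show the ZooKeeper process on every node:

jps    # should list a QuorumPeerMain process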

Preparations before Installation

Check whether the CPU supports SSE4.2

Check whether the CPU supports the SSE4.2 instruction set

grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"

Install required dependencies

yum install -y unixODBC libicudata

yum install -y libxml2-devel expat-devel libicu-devel

Install ClickHouse

Stand-alone mode

Upload the 4 files to the /opt/software/ directory on the node01 machine
[root@node01 softwares]# ll
total 306776
-rw-r--r--. 1 root root      6384 Nov  2 22:43 clickhouse-client-20.8.3.18-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  69093220 Nov  2 22:48 clickhouse-common-static-20.8.3.18-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  36772044 Nov  2 22:51 clickhouse-server-20.8.3.18-1.el7.x86_64.rpm
-rw-r--r--. 1 root root     14472 Nov  2 22:43 clickhouse-server-common-20.8.3.18-1.el7.x86_64.rpm
Install the four rpm packages one by one
[root@node01 softwares]# rpm -ivh clickhouse-common-static-20.8.3.18-1.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:clickhouse-common-static-20.8.3.1################################# [100%]
[root@node01 softwares]# rpm -ivh clickhouse-server-common-20.8.3.18-1.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:clickhouse-server-common-20.8.3.1################################# [100%]
[root@node01 softwares]# rpm -ivh clickhouse-server-20.8.3.18-1.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:clickhouse-server-20.8.3.18-1.el7################################# [100%]
Create user clickhouse.clickhouse with datadir /var/lib/clickhouse
[root@node01 softwares]# rpm -ivh clickhouse-client-20.8.3.18-1.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:clickhouse-client-20.8.3.18-1.el7################################# [100%]
Create user clickhouse.clickhouse with datadir /var/lib/clickhouse
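
Optionally, confirm that the four packages are registered with rpm:

rpm -qa | grep clickhouse
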
After the rpm installation, the configuration directories of clickhouse-server and clickhouse-client are as follows
[root@node01 softwares]# ll /etc/clickhouse-server/
total 44
-rw-r--r--. 1 root root 33738 Oct  6 06:05 config.xml
-rw-r--r--. 1 root root  5587 Oct  6 06:05 users.xml
[root@node01 softwares]# ll /etc/clickhouse-client/
total 4
drwxr-xr-x. 2 clickhouse clickhouse    6 Nov 28 22:19 conf.d
-rw-r--r--. 1 clickhouse clickhouse 1568 Oct  6 04:44 config.xml

The config.xml file under /etc/clickhouse-server/ is the core ClickHouse configuration file. Its contents are as follows:

<yandex>
   <!-- logging -->
   <logger>
       <level>trace</level>
       <log>/data1/clickhouse/log/server.log</log>
       <errorlog>/data1/clickhouse/log/error.log</errorlog>
       <size>1000M</size>
       <count>10</count>
   </logger>

   <!-- ports -->
   <http_port>8123</http_port>
   <tcp_port>9000</tcp_port>
   <interserver_http_port>9009</interserver_http_port>

   <!-- local domain name -->
   <interserver_http_host>Fill in the local domain name here; it is needed if you use replication later</interserver_http_host>

   <!-- listen address -->
   <listen_host>0.0.0.0</listen_host>
   <!-- maximum number of connections -->
   <max_connections>64</max_connections>

   <!-- keep-alive timeout, in seconds -->
   <keep_alive_timeout>3</keep_alive_timeout>

   <!-- maximum number of concurrent queries -->
   <max_concurrent_queries>16</max_concurrent_queries>

   <!-- cache sizes, in bytes -->
   <uncompressed_cache_size>8589934592</uncompressed_cache_size>
   <mark_cache_size>10737418240</mark_cache_size>

   <!-- storage paths -->
   <path>/data1/clickhouse/</path>
   <tmp_path>/data1/clickhouse/tmp/</tmp_path>

   <!-- user configuration -->
   <users_config>users.xml</users_config>
   <default_profile>default</default_profile>

   <log_queries>1</log_queries>

   <default_database>default</default_database>

   <remote_servers incl="clickhouse_remote_servers" />
   <zookeeper incl="zookeeper-servers" optional="true" />
   <macros incl="macros" optional="true" />

   <!-- reload interval for built-in dictionaries, in seconds -->
   <builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>

   <!-- maximum size of a table that can be dropped; 0 means no limit -->
   <max_table_size_to_drop>0</max_table_size_to_drop>

   <include_from>/data1/clickhouse/metrika.xml</include_from>
</yandex>
Start ClickHouse
[root@node01 softwares]# service clickhouse-server start
Start clickhouse-server service: Path to data directory in /etc/clickhouse-server/config.xml: /var/lib/clickhouse/
DONE
[root@node01 softwares]# 

Use the client to connect to the server

[root@node01 softwares]# clickhouse-client -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.

node01 :) show databases;

SHOW DATABASES

┌─name───────────────────────────┐
│ _temporary_and_external_tables │
│ default                        │
│ system                         │
└────────────────────────────────┘

3 rows in set. Elapsed: 0.007 sec.

node01 :) select 1;

SELECT 1

┌─1─┐
│ 1 │
└───┘

1 rows in set. Elapsed: 0.005 sec.

node01 :)
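
Besides the native TCP client, the HTTP interface on port 8123 (configured above) can also be probed; a plain GET should answer with "Ok.":

curl http://node01:8123/
# Ok.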

Distributed Cluster Installation

Perform all of the previous steps on node02 and node03 as well.
Modify the configuration file config.xml on the node01 machine
[root@node01 softwares]# vim /etc/clickhouse-server/config.xml

<!-- listen on all addresses -->
<listen_host>::</listen_host>
<!-- Same for hosts with disabled ipv6: -->
<!-- <listen_host>0.0.0.0</listen_host> -->

<!-- include the cluster definition from metrika.xml -->
<include_from>/etc/clickhouse-server/metrika.xml</include_from>

Distribute the modified configuration to the node02 and node03 machines

scp config.xml node02:/etc/clickhouse-server/config.xml

scp config.xml node03:/etc/clickhouse-server/config.xml

Create the metrika.xml file in the /etc/clickhouse-server/ directory on node01
<yandex>
<!-- cluster configuration -->
<clickhouse_remote_servers>
    <!-- cluster name: 3 shards, 1 replica -->
    <cluster_3shards_1replicas>
        <!-- data shard 1 -->
        <shard>
            <replica>
                <host>node01</host>
                <port>9000</port>
            </replica>
        </shard>
        <!-- data shard 2 -->
        <shard>
            <replica>
                <host>node02</host>
                <port>9000</port>
            </replica>
        </shard>
        <!-- data shard 3 -->
        <shard>
            <replica>
                <host>node03</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster_3shards_1replicas>
</clickhouse_remote_servers>
</yandex>

Configuration instructions

  • cluster_3shards_1replicas is the cluster name and can be defined freely.
  • A total of three shards are configured, and each shard has only one replica.

Distribute the metrika.xml configuration file to the node02 and node03 machines

scp metrika.xml node02:/etc/clickhouse-server/metrika.xml

scp metrika.xml node03:/etc/clickhouse-server/metrika.xml

Restart clickhouse-server and open the client to check the cluster
[root@node01 clickhouse-server]# service clickhouse-server restart
Stop clickhouse-server service: DONE
Start clickhouse-server service: Path to data directory in /etc/clickhouse-server/config.xml: /var/lib/clickhouse/
DONE
[root@node01 clickhouse-server]# clickhouse-client -m
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.

node01 :) select * from system.clusters;

SELECT * FROM system.clusters

┌─cluster───────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address───┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ cluster_3shards_1replicas │         1 │            1 │           1 │ node01    │ 192.168.10.100 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ cluster_3shards_1replicas │         2 │            1 │           1 │ node02    │ 192.168.10.110 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ cluster_3shards_1replicas │         3 │            1 │           1 │ node03    │ 192.168.10.120 │ 9000 │        0 │ default │                  │            0 │                       0 │
└───────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴────────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
(the remaining rows are the built-in test_* clusters from the default configuration)

11 rows in set. Elapsed: 0.008 sec.

cluster_3shards_1replicas is the cluster we defined: three shards, each holding one replica of its data. The remaining entries are the default cluster definitions that ship with the stock configuration file.
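
To look at only the cluster defined in metrika.xml and skip the built-in test clusters, the query can be filtered, for example:

clickhouse-client -q "SELECT cluster, shard_num, replica_num, host_name, host_address, port, is_local FROM system.clusters WHERE cluster = 'cluster_3shards_1replicas'"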

Testing a distributed cluster

Create the local table cluster3s1r_local on node01, node02, and node03

CREATE TABLE default.cluster3s1r_local
(
    `id` Int32,
    `website` String,
    `wechat` String,
    `FlightDate` Date,
    `Year` UInt16
)
ENGINE = MergeTree(FlightDate, (Year, FlightDate), 8192);

Create a distributed table on node01

CREATE TABLE default.cluster3s1r_all AS cluster3s1r_local
ENGINE = Distributed(cluster_3shards_1replicas, default, cluster3s1r_local, rand());

Insert data into the distributed table cluster3s1r_all; the rows will be distributed at random (via rand()) across the cluster3s1r_local tables on the three nodes.

Insert data

INSERT INTO default.cluster3s1r_all (id, website, wechat, FlightDate, Year) VALUES (1, 'https://niocoder.com/', 'Java dry goods', '2020-11-28', 2020);
INSERT INTO default.cluster3s1r_all (id, website, wechat, FlightDate, Year) VALUES (2, 'http://www.merryyou.cn/', 'javaganhuo', '2020-11-28', 2020);
INSERT INTO default.cluster3s1r_all (id, website, wechat, FlightDate, Year) VALUES (3, 'http://www.xxxxx.cn/', 'xxxxx', '2020-11-28', 2020);

Query distributed tables and local tables

node01 :) select * from cluster3s1r_all;   # query the distributed table

SELECT * FROM cluster3s1r_all

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘
┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│  3 │ http://www.xxxxx.cn/ │ xxxxx  │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat─────────┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ Java dry goods │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴────────────────┴────────────┴──────┘

3 rows in set.

node01 :) select * from cluster3s1r_local;   # local table on node01

SELECT * FROM cluster3s1r_local

┌─id─┬─website─────────────────┬─wechat─────┬─FlightDate─┬─Year─┐
│  2 │ http://www.merryyou.cn/ │ javaganhuo │ 2020-11-28 │ 2020 │
└────┴─────────────────────────┴────────────┴────────────┴──────┘

1 rows in set.

node02 :) select * from cluster3s1r_local;   # local table on node02 (empty)

SELECT * FROM cluster3s1r_local

Ok.

0 rows in set.

node03 :) select * from cluster3s1r_local;   # local table on node03

SELECT * FROM cluster3s1r_local

┌─id─┬─website──────────────┬─wechat─┬─FlightDate─┬─Year─┐
│  3 │ http://www.xxxxx.cn/ │ xxxxx  │ 2020-11-28 │ 2020 │
└────┴──────────────────────┴────────┴────────────┴──────┘
┌─id─┬─website───────────────┬─wechat─────────┬─FlightDate─┬─Year─┐
│  1 │ https://niocoder.com/ │ Java dry goods │ 2020-11-28 │ 2020 │
└────┴───────────────────────┴────────────────┴────────────┴──────┘

2 rows in set. Elapsed: 0.006 sec.
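
A compact way to double-check the distribution is to compare row counts (a sketch; the expected counts assume the three inserts above and this particular random split):

clickhouse-client -h node01 -q "SELECT count() FROM default.cluster3s1r_all"      # 3
clickhouse-client -h node01 -q "SELECT count() FROM default.cluster3s1r_local"    # 1
clickhouse-client -h node02 -q "SELECT count() FROM default.cluster3s1r_local"    # 0
clickhouse-client -h node03 -q "SELECT count() FROM default.cluster3s1r_local"    # 2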

Download

【ClickHouse】