This article was originally published by Walker AI.

Computing our turn-based battle data metrics took too long, and on a single machine the job sometimes could not finish at all due to insufficient memory. We therefore considered replacing the original single-node ClickHouse with a cluster and performing the related computations on distributed tables.

1. Environment Setup

1.1 Single-node Solution

| ClickHouse instance | CPU | Memory | Disk |
| --- | --- | --- | --- |
| ClickHouse | 16C | 64G | 4T, high IO |

1.2 Cluster Solution (3 Shards, 1 Replica)

| ClickHouse instance | CPU | Memory | Disk |
| --- | --- | --- | --- |
| ClickHouse1 | 16C | 64G | 4T, high IO |
| ClickHouse2 | 8C | 32G | 1T, ultra-high IO |
| ClickHouse3 | 8C | 32G | 1T, ultra-high IO |
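
Once the shards and replicas are declared in each server's configuration, the topology can be sanity-checked from any node. A minimal check against the real system.clusters table (the cluster name matches the DDL used later in this article):

```sql
-- Confirm that all three shards are visible under the cluster name
-- referenced by the ON CLUSTER statements below.
SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters
WHERE cluster = 'cluster_3shards_1replicas';
```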

2. Solution Comparison

2.1 Write Speed Comparison

Data volume: 26,910,101 rows

Solution 1: Single-machine solution (the full data set is inserted into the single-machine table)

| ClickHouse instance | Write speed |
| --- | --- |
| ClickHouse | 26.009 sec; 1.03 million rows/s, 504.78 MB/s |

Solution 2: Cluster solution (the data is written to the physical tables of the three machines in parallel)

| ClickHouse instance | Write speed |
| --- | --- |
| ClickHouse1 | 13.640 sec; 1.97 million rows/s, 320.96 MB/s |
| ClickHouse2 | 9.166 sec; 2.94 million rows/s, 982.03 MB/s |
| ClickHouse3 | 9.632 sec; 2.79 million rows/s, 931.00 MB/s |

Conclusion: write speed is tied to disk IO, and the cluster solution has a clear advantage over the single-machine solution.
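
For reference, a minimal sketch of the parallel write measured above, assuming each node holds its own slice of the source data as a CSV file under the server's user_files_path (the file name and the three-way split are hypothetical; the physical table's DDL appears in the next section). Each node loads directly into its local physical table, bypassing the Distributed engine, so the three machines ingest in parallel:

```sql
-- Run on each node against that node's slice of the data.
INSERT INTO rd_physical.rd_baseinfo_physical
SELECT *
FROM file('baseinfo_shard1.csv', 'CSV', 'appId String, pvpId String, accountId String, userName String, round Nullable(Int32), event String, mode Nullable(Int32), win Int32, country String, timeStamp String, ts DateTime, ds Date');
```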

2.2 Query Comparison (Aggregation and Join Queries)

(1) Creating the distributed tables

- Physical (local) table

```sql
CREATE TABLE rd_physical.rd_baseinfo_physical ON CLUSTER cluster_3shards_1replicas
(
    `appId` String,
    `pvpId` String,
    `accountId` String,
    `userName` String,
    `round` Nullable(Int32),
    `event` String,
    `mode` Nullable(Int32),
    `win` Int32,
    `country` String,
    `timeStamp` String,
    `ts` DateTime,
    `ds` Date
)
ENGINE = ReplicatedMergeTree('/clickhouse/rd_physical/tables/{shard}/rd_baseinfo_physical', '{replica}')
PARTITION BY (ds)
ORDER BY (appId, accountId, pvpId)
SETTINGS index_granularity = 8192
```

- Logical (distributed) table

```sql
CREATE TABLE rd_data.rd_baseinfo ON CLUSTER cluster_3shards_1replicas
(
    `appId` String,
    `pvpId` String,
    `accountId` String,
    `userName` String,
    `round` Nullable(Int32),
    `event` String,
    `mode` Nullable(Int32),
    `win` Int32,
    `country` String,
    `timeStamp` String,
    `ts` DateTime,
    `ds` Date
)
ENGINE = Distributed(cluster_3shards_1replicas, rd_physical, rd_baseinfo_physical, cityHash64(accountId))
```
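Usage in brief: applications read and write only the logical table; the Distributed engine routes each row to a shard by cityHash64(accountId) and fans reads out to all shards. A minimal sketch with made-up sample values:

```sql
-- The sample row below is invented for illustration.
INSERT INTO rd_data.rd_baseinfo
VALUES ('demoApp', 'pvp0001', 'acc42', 'alice', 3, 'battle_end', 1, 1, 'CN',
        '1577836800', toDateTime('2020-01-01 00:00:00'), toDate('2020-01-01'));

-- Reads fan out to every shard and merge on the initiating node.
SELECT count() FROM rd_data.rd_baseinfo WHERE ds = '2020-01-01';
```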

(2) Aggregation query

The SQL statement:

```sql
SELECT count(*), accountId, pvpId
FROM rd_data.rd_baseinfo
WHERE ds >= '2019-12-01' AND ds < '2020-01-01'
GROUP BY accountId, pvpId;
```

| Single-machine solution | Cluster solution |
| --- | --- |
| 10.177 sec; 78.81 million rows/s, 2.66 GB/s | 6.264 sec; 104.32 million rows/s, 3.46 GB/s |

Conclusion: for this aggregation the cluster solution is clearly faster than the single-machine solution, cutting query time by roughly 40% (6.264 sec vs 10.177 sec).
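
One detail worth noting: the table is sharded by cityHash64(accountId) and the GROUP BY key starts with accountId, so no (accountId, pvpId) group ever spans two shards, and each shard's partial aggregate is already final. A hedged sketch using the real distributed_group_by_no_merge setting to skip the merge step on the initiator (the benefit depends on the ClickHouse version and query shape):

```sql
-- Safe only because the sharding key guarantees that all rows for a given
-- accountId live on the same shard.
SELECT count(*), accountId, pvpId
FROM rd_data.rd_baseinfo
WHERE ds >= '2019-12-01' AND ds < '2020-01-01'
GROUP BY accountId, pvpId
SETTINGS distributed_group_by_no_merge = 1;
```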

(3) Join query

| Join size | Single-machine solution | Cluster solution |
| --- | --- | --- |
| 5 million join 1 million | 0.946 sec; 0.86 million rows/s, 53.29 MB/s | 0.920 sec; 1.09 million rows/s, 75.70 MB/s |
| 10 million join 1 million | 0.880 sec; 0.94 million rows/s, 58.80 MB/s | 0.921 sec; 1.09 million rows/s, 75.59 MB/s |
| 20 million join 1 million | 0.938 sec; 0.87 million rows/s, 53.96 MB/s | 0.923 sec; 1.08 million rows/s, 75.41 MB/s |
| 50 million join 1 million | 0.940 sec; 0.86 million rows/s, 53.81 MB/s | 0.960 sec; 1.04 million rows/s, 72.53 MB/s |
| 100 million join 1 million | 1.906 sec; 0.90 million rows/s, 56.56 MB/s | 1.135 sec; 880.95 thousand rows/s, 61.34 MB/s |
| 5 million join 5 million | 5.141 sec; 1.01 million rows/s, 74.07 MB/s | 3.791 sec; 1.32 million rows/s, 91.46 MB/s |
| 10 million join 5 million | 5.149 sec; 1.01 million rows/s, 73.92 MB/s | 4.127 sec; 1.21 million rows/s, 84.00 MB/s |
| 20 million join 5 million | 5.172 sec; 1.00 million rows/s, 73.46 MB/s | 4.110 sec; 1.22 million rows/s, 84.36 MB/s |
| 50 million join 5 million | 5.096 sec; 1.02 million rows/s, 75.00 MB/s | 4.342 sec; 1.15 million rows/s, 79.84 MB/s |
| 100 million join 5 million | 6.108 sec; 1.02 million rows/s, 74.75 MB/s | 4.362 sec; 1.15 million rows/s, 79.49 MB/s |
| 5 million join 10 million | 12.341 sec; 1.16 million rows/s, 85.39 MB/s | 7.885 sec; 1.27 million rows/s, 87.61 MB/s |
| 10 million join 10 million | 12.337 sec; 1.16 million rows/s, 85.44 MB/s | 7.916 sec; 1.26 million rows/s, 87.27 MB/s |
| 20 million join 10 million | 12.324 sec; 1.17 million rows/s, 85.61 MB/s | 7.777 sec; 1.29 million rows/s, 88.84 MB/s |
| 50 million join 10 million | 13.039 sec; 1.14 million rows/s, 87.10 MB/s | 8.083 sec; 1.24 million rows/s, 85.46 MB/s |
| 100 million join 10 million | 13.101 sec; 1.13 million rows/s, 86.43 MB/s | 8.578 sec; 1.17 million rows/s, 80.53 MB/s |

Conclusion: for joins on small data volumes there is little difference between the single-machine and cluster solutions. On large data volumes the single-machine solution falls clearly behind the cluster solution and can run into problems such as insufficient memory.
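
A note on writing such joins against the cluster (a hedged sketch; rd_data.rd_userinfo is a hypothetical second distributed table). When the right-hand table is distributed, GLOBAL JOIN ships it to every shard so each shard joins against the complete right side; without GLOBAL, each shard would only see its local slice of the right table. When both tables are sharded by the same key, as rd_baseinfo is by cityHash64(accountId), a plain local JOIN on that key is both correct and cheaper:

```sql
-- rd_data.rd_userinfo is invented for illustration.
SELECT b.accountId, count() AS pvp_count
FROM rd_data.rd_baseinfo AS b
GLOBAL INNER JOIN rd_data.rd_userinfo AS u ON b.accountId = u.accountId
WHERE b.ds >= '2019-12-01'
GROUP BY b.accountId;
```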

3. Others

ClickHouse supports only limited query concurrency; by default a server caps it at 100 concurrent queries (the max_concurrent_queries server setting). A single-node ClickHouse is therefore not suited to high-concurrency business queries. A ClickHouse cluster can put chproxy in front to spread concurrent queries across the shards, greatly increasing the concurrency the service can sustain.
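
As a quick way to observe query concurrency on a node, the queries currently executing are listed in the real system.processes table:

```sql
-- Compare the number of running queries against the max_concurrent_queries
-- server setting (default 100).
SELECT count() AS running_queries FROM system.processes;
```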

4. Performance Test Summary

The data write performance of the single-node scheme is far inferior to that of the cluster scheme.

On queries, the single-machine and cluster solutions differ little at small data volumes. At large data volumes the cluster solution avoids problems such as running out of memory or CPU, and its query efficiency is also higher than the single-machine solution's.

Compared with the single-node solution, the cluster solution makes table creation slightly more complex. Also, data written through the distributed table does not land in each shard's physical table synchronously: by default the Distributed engine first queues the inserted blocks locally and then forwards them to the shards asynchronously.
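
Two real knobs control this behavior when it matters: the insert_distributed_sync setting and the SYSTEM FLUSH DISTRIBUTED statement. A minimal sketch:

```sql
-- Make INSERTs through the Distributed table synchronous for this session,
-- so the statement returns only after the shards have received the data
-- (default is 0, i.e. asynchronous forwarding).
SET insert_distributed_sync = 1;

-- Alternatively, keep inserts asynchronous and flush the pending queue
-- of a Distributed table on demand.
SYSTEM FLUSH DISTRIBUTED rd_data.rd_baseinfo;
```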

In summary, since the round and roundData data volumes keep growing, building a clustered, distributed data storage solution is a feasible path forward.


PS: For more technical articles, follow our WeChat official account [xingzhe_ai] and join the discussion with Walker AI!