Intersection scenarios come up everywhere: "N of your friends have read this article" under an official-account post, or "N of your friends are in this group" in a group-chat member list. I recently ran into a similar scenario. My first worry was the volume of online data: Redis intersection operations have non-trivial time complexity, so would real-time computation be inappropriate? Would offline computation be better? I asked a senior colleague in the group, who said Redis is very fast on small data sets. That sounded plausible, but I wanted to verify it with an actual test before going live. The test process is written up below for anyone who needs a reference.
Suppose the goal is to count the number of paying users among currently online users. The test results are shown below. The conclusion: the time taken depends both on the size of the data sets and on how much they overlap, which is consistent with ZINTERSTORE's documented time complexity. The right side of the graph shows the results after increasing the overlap; the time rises noticeably.
ZINTERSTORE time complexity: O(N*K)+O(M*log(M)), where N is the cardinality of the smallest input sorted set, K is the number of input sorted sets, and M is the number of elements in the result set.
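To make the formula concrete, here is a minimal pure-Python model of what ZINTERSTORE computes with its defaults (WEIGHTS all 1, AGGREGATE SUM): the members present in every input set, with their scores summed. The member names and scores below are made up for illustration; this sketches the semantics, not the Redis internals.

```python
def zinterstore(*zsets):
    """Each zset is modeled as a dict mapping member -> score."""
    # Walk the smallest input set -- this is why N in O(N*K) is the
    # cardinality of the smallest sorted set.
    smallest = min(zsets, key=len)
    result = {}
    for member in smallest:
        if all(member in z for z in zsets):  # member must be in all K sets
            result[member] = sum(z[member] for z in zsets)  # AGGREGATE SUM
    return result

online = {"u1": 10.0, "u2": 20.0, "u3": 30.0}
rich   = {"u2": 1.0, "u3": 2.0, "u4": 3.0}

print(zinterstore(online, rich))  # {'u2': 21.0, 'u3': 32.0}
```

The real command additionally sorts the M overlapping members into the destination sorted set, which is where the O(M*log(M)) term comes from.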
Test equipment:
Tencent Cloud server, 16 cores / 32 GB memory
The data sets are generated with the following Redis Lua script (run it with redis-cli --eval create_data.lua).
create_data.lua:
-- Clear any keys from a previous run (all sizes, including 500k and 1M).
redis.call("del", "online_user_1000", "rich_user_1000", "online_user_5000", "rich_user_5000", "online_user_10000", "rich_user_10000", "online_user_50000", "rich_user_50000", "online_user_100000", "rich_user_100000", "online_user_500000", "rich_user_500000", "online_user_1000000", "rich_user_1000000")
-- Members are random integers, so duplicates overwrite each other and the
-- actual cardinality can be slightly below the nominal count.
-- 1,000 users
for i = 1, 1000, 1 do
    redis.call("zadd", "online_user_1000", math.random(1000000) * math.random(100000), math.random(10000000))
    redis.call("zadd", "rich_user_1000", math.random(1000000) * math.random(100000), math.random(10000000))
end
-- 5,000 users
for i = 1, 5000, 1 do
    redis.call("zadd", "online_user_5000", math.random(1000000) * math.random(100000), math.random(10000000))
    redis.call("zadd", "rich_user_5000", math.random(1000000) * math.random(100000), math.random(10000000))
end
-- 10,000 users
for i = 1, 10000, 1 do
    redis.call("zadd", "online_user_10000", math.random(1000000) * math.random(100000), math.random(10000000))
    redis.call("zadd", "rich_user_10000", math.random(1000000) * math.random(100000), math.random(10000000))
end
-- 50,000 users
for i = 1, 50000, 1 do
    redis.call("zadd", "online_user_50000", math.random(1000000) * math.random(100000), math.random(10000000))
    redis.call("zadd", "rich_user_50000", math.random(1000000) * math.random(100000), math.random(10000000))
end
-- 100,000 users
for i = 1, 100000, 1 do
    redis.call("zadd", "online_user_100000", math.random(1000000) * math.random(100000), math.random(10000000))
    redis.call("zadd", "rich_user_100000", math.random(1000000) * math.random(100000), math.random(10000000))
end
-- 500,000 users
for i = 1, 500000, 1 do
    redis.call("zadd", "online_user_500000", math.random(1000000) * math.random(100000), math.random(10000000))
    redis.call("zadd", "rich_user_500000", math.random(1000000) * math.random(100000), math.random(10000000))
end
-- 1,000,000 users
for i = 1, 1000000, 1 do
    redis.call("zadd", "online_user_1000000", math.random(1000000) * math.random(100000), math.random(10000000))
    redis.call("zadd", "rich_user_1000000", math.random(1000000) * math.random(100000), math.random(10000000))
end
return "OK"
Test tool: redis-benchmark
The test commands are as follows; -n specifies the total number of requests.
redis-benchmark -n 10000 zinterstore inter_user_1000 2 online_user_1000 rich_user_1000
redis-benchmark -n 10000 zinterstore inter_user_5000 2 online_user_5000 rich_user_5000
redis-benchmark -n 10000 zinterstore inter_user_10000 2 online_user_10000 rich_user_10000
redis-benchmark -n 10000 zinterstore inter_user_50000 2 online_user_50000 rich_user_50000
redis-benchmark -n 10000 zinterstore inter_user_100000 2 online_user_100000 rich_user_100000
redis-benchmark -n 10000 zinterstore inter_user_500000 2 online_user_500000 rich_user_500000
redis-benchmark -n 10000 zinterstore inter_user_1000000 2 online_user_1000000 rich_user_1000000
Test screenshot:
In my business scenario, the data sets are generally under 10,000 members, with an upper bound of no more than 1 million, so based on the measured results real-time computation should be fine. The test also showed that although a single request is not expensive, a large volume of requests over large data sets will occupy a significant share of the Redis server's processing time (Redis executes commands on a single thread), delaying or timing out ordinary requests. This also needs to be considered on the business side.
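That server-occupancy concern can be put into a rough back-of-envelope model: since Redis processes commands on a single thread, the fraction of its event loop consumed by these intersections is simply request rate times per-call latency. All numbers below are hypothetical placeholders; substitute your own measured latency and expected QPS.

```python
def redis_busy_fraction(qps, avg_call_seconds):
    """Approximate fraction of the single-threaded Redis event loop
    spent on these calls; as it approaches 1.0, every other command
    queued behind them sees growing delay."""
    return qps * avg_call_seconds

# Hypothetical example: 200 intersections/s, each taking 2 ms, would
# keep Redis busy roughly 40% of the time on this workload alone.
print(redis_busy_fraction(200, 0.002))
```

This is only a capacity sketch (it ignores pipelining and other commands' cost), but it makes the article's caveat quantitative: cheap individual calls can still saturate the server in aggregate.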