Distributed unique ids
Globally unique IDs are something every distributed system eventually needs. They are used in search and storage, as unique identifiers, or for sorting, for example globally unique order numbers or coupon codes. If two identical order numbers ever appeared, it would undoubtedly be a huge bug from the user's point of view.
In a single-node system there is no real challenge in generating a unique ID: there is only one application on one machine, and a singleton with an atomic increment is enough. In a distributed system, however, different applications, in different machine rooms, on different machines, all need to generate IDs that are unique across the whole system, and that takes real effort.
In a word:
Distributed unique ids are used to uniquely identify data.
Characteristics of distributed unique ids
The core requirement of a distributed unique ID is uniqueness; everything else is an additional property. Generally speaking, an excellent globally unique ID scheme has the following characteristics, listed here for reference:
- Globally unique: it must never repeat. This is the core feature!
- Roughly ordered or monotonically increasing: increasing IDs make searching, sorting, and range queries easier
- High performance: ID generation responds quickly with low latency
- High availability: if the ID service fails, every service in the company that relies on globally unique IDs becomes unavailable, so the generating service must be highly available
- Easy to use: friendly to integrate with, ideally encapsulated so it works out of the box
- Information security: in some scenarios consecutive IDs are easy to guess and can be exploited, so this is a trade-off
Distributed unique ID generation scheme
Generating a UUID directly
Anyone who has written Java knows the UUID class: it is sometimes used in logging to generate a random ID that uniquely identifies the current request, with code like the following:
String uuid = UUID.randomUUID().toString();
UUID stands for Universally Unique Identifier. It is essentially a 128-bit binary integer, usually written as a string of 32 hexadecimal digits, and it almost never repeats: 2^128 is an incredibly large number.
The following explanation is from Baidu Baike:
A UUID consists of the following parts:
(1) The first part is time-related: if you generate another UUID a few seconds later, the first part differs and the rest stays the same.
(2) A clock sequence.
(3) A globally unique IEEE machine identifier. If a network card is present it is derived from the NIC's MAC address; otherwise it is obtained by other means.
The only drawback of a UUID is that the resulting string can be long. The most widely used UUID standard is Microsoft's GUID (Globally Unique Identifier). In ColdFusion, the CreateUUID() function generates UUIDs in the format xxxxxxxx-xxxx-xxxxxxxxxxxxxxxx (8-4-4-16), where each x is a hexadecimal digit in the range 0-9 or A-F. The standard UUID format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (8-4-4-4-12); the CreateGUID() UDF available on CFLIB can be used for conversion.
(4) Hibernate (a Java ORM framework) composes its UUIDs from IP - JVM startup time - current time shifted right by 32 bits - current time - an internal counter (8-8-4-8-4).
Can it ever repeat? Two identical virtual machines, booted at the same time with the same random seed, generating a UUID at the same instant, have an extremely small probability of collision. So in theory a repeat is possible, but in practice it essentially never happens!
Advantages of UUID:
- Good performance, high efficiency
- No network request; generated entirely locally
- Different machines can each generate their own IDs without ever colliding
UUID sounds great, so is it a silver bullet? Of course not; the drawbacks are just as prominent:
- It is not increasing, so there is no way to sort by it
- It is long and takes a lot of storage space; in a database in particular it is unfriendly to indexes
- It carries no business meaning; it is just a string of characters with no semantics or pattern
Of course, people have tried to improve it. One tweak targets its unwieldy form by converting the UUID to an int64, i.e. a long (the snippet below is C#):
byte[] bytes = Guid.NewGuid().ToByteArray();
return BitConverter.ToInt64(bytes, 0);
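For Java readers, a rough equivalent of the trick above is sketched below; note that folding 128 bits into 64 raises the collision probability, so the result is no longer a real UUID:
import java.util.UUID;

public class UuidToLong {
    // Fold the 128-bit UUID into a 64-bit long by XOR-ing its two halves.
    public static long randomLong() {
        UUID uuid = UUID.randomUUID();
        return uuid.getMostSignificantBits() ^ uuid.getLeastSignificantBits();
    }

    public static void main(String[] args) {
        System.out.println(randomLong());
    }
}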
Another tweak targets the lack of ordering, such as NHibernate's COMB algorithm, which keeps the first 20 characters of the GUID and generates the last 12 from the current time. The result is roughly time-ordered, a small improvement.
Verdict: do not use a UUID as a database index; for logs or request/context identification it is quite handy, but using it as an order number would be a disaster.
Database auto-increment sequence
Standalone database
A database primary key has a natural auto-increment property. If we make the ID the primary key and mark it auto_increment, then every record we insert gets a new increasing ID that the database returns to us, for example with the following DDL:
CREATE DATABASE `test`;
use test;
CREATE TABLE id_table (
  id bigint(20) unsigned NOT NULL auto_increment,
  value char(10) NOT NULL default '',
  PRIMARY KEY (id)
) ENGINE=MyISAM;
Insert statement:
insert into id_table(value) VALUES ('v1');
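From Java, the freshly assigned auto-increment value can be read back through JDBC's generated-keys API. A minimal sketch, assuming a local MySQL with the test database above and placeholder credentials:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AutoIncrementDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder connection settings
        try (Connection conn = DriverManager.getConnection(url, "root", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "insert into id_table(value) values (?)",
                     Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, "v1");
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    // The auto-increment ID just assigned by the database
                    System.out.println("generated id = " + keys.getLong(1));
                }
            }
        }
    }
}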
Advantages:
- Single machine, simple, fast
- Natural increment, atomicity
- Numeric IDs make sorting, searching, and paging convenient
The disadvantages are also obvious:
- Single point: if the machine goes down, ID generation stops with it
- A single machine cannot handle high concurrency
Cluster database
With multiple masters, each machine cannot simply generate IDs on its own, or duplicates will appear.
The trick is to give each machine its own starting value and a common step size. Say there are three machines V1, V2, V3:
Common step size: 3
V1 start value: 1
V2 start value: 2
V3 start value: 3
Generated ID:
V1:1, 4, 7, 10...
V2:2, 5, 8, 11...
V3:3, 6, 9, 12...
These can be set from the command line:
set @@auto_increment_offset = 1;      -- the starting value
set @@auto_increment_increment = 3;   -- the step size
With enough masters, performance is assured; even if some machines go down, slaves can take over, and master-slave replication greatly reduces the pressure on any single machine. But there are drawbacks:
- Master-slave replication has delay: if the master goes down and a slave is promoted before it has caught up, numbers may be issued twice.
- Once the starting values and step size are set, adding machines later (horizontal scaling) is hard to adjust; in many cases it requires downtime to reconfigure
Batch number segments from the database
The schemes above hit the database too frequently; under high concurrency all kinds of low-probability problems can surface. So why not take out a whole range of IDs at once, keep it in memory, hand IDs out from there, and apply for another range only when it runs out? The advantages of the cluster mode can be kept as well, by fetching a range of IDs from the database each time. For example, with three machines the ranges could be issued like this:
Fetch 1000 at a time; with three machines, each machine's range advances by 3000 per round
V1: 1-1000, 3001-4000
V2: 1001-2000, 4001-5000
V3: 2001-3000, 5001-6000
Of course, even without multiple machines you can claim, say, 10,000 numbers at a time, protected by an optimistic lock with a version column:
CREATE TABLE id_table (
  `id` int(10) NOT NULL,
  `max_id` bigint(20) NOT NULL COMMENT 'Current maximum ID',
  `step` int(20) NOT NULL COMMENT 'Step size of the segment',
  `version` int(20) NOT NULL COMMENT 'Version number',
  `create_time` datetime DEFAULT CURRENT_TIMESTAMP,
  `update_time` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
)
Each time a new segment is needed, first select the current row, then run the optimistic-lock update; the update only succeeds if the version has not changed in the meantime:
update id_table set max_id = #{max_id + step}, version = version + 1 where id = #{id} and version = #{version}
Key points:
- Batch fetching greatly reduces the number of database requests
- The optimistic lock keeps the data correct under concurrent updates
- The database is touched only when a new segment is needed; the batch fetch can even run as an asynchronous scheduled task that automatically tops up when the remaining IDs fall below a threshold (a sketch of the fetch follows this list)
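A minimal JDBC sketch of that segment fetch. The table and columns follow the DDL above; the connection settings, the id of the business row, and the retry-until-success policy are assumptions for illustration:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SegmentFetcher {

    private static final String URL = "jdbc:mysql://localhost:3306/test"; // placeholder

    // Claims one segment for this node and returns the inclusive range [start, end].
    public long[] fetchSegment(int bizId) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL, "root", "secret")) {
            while (true) {
                long maxId, step, version;
                // 1. Read the current row
                try (PreparedStatement select = conn.prepareStatement(
                        "select max_id, step, version from id_table where id = ?")) {
                    select.setInt(1, bizId);
                    try (ResultSet rs = select.executeQuery()) {
                        if (!rs.next()) throw new SQLException("no segment row for id " + bizId);
                        maxId = rs.getLong("max_id");
                        step = rs.getLong("step");
                        version = rs.getLong("version");
                    }
                }
                // 2. Optimistic-lock update: succeeds only if nobody bumped the version in between
                try (PreparedStatement update = conn.prepareStatement(
                        "update id_table set max_id = ?, version = version + 1 "
                                + "where id = ? and version = ?")) {
                    update.setLong(1, maxId + step);
                    update.setInt(2, bizId);
                    update.setLong(3, version);
                    if (update.executeUpdate() == 1) {
                        // This node now owns (maxId, maxId + step]; hand the IDs out from memory.
                        return new long[]{maxId + 1, maxId + step};
                    }
                }
                // Lost the race to another node: re-read and try again.
            }
        }
    }
}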
Redis takes the stage
Redis has the atomic command INCR, which increments atomically; Redis is fast because it is memory-based:
127.0.0.1:6379> set id 1
OK
127.0.0.1:6379> incr id
(integer) 2
Of course, a single Redis node is a single point of failure, so you can move to a cluster and again use an initial value plus a step size with the INCRBY command, so that several machines together can basically withstand high concurrency.
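For illustration, the same pattern from Java using the Jedis client (a sketch only; the key name, the step of 3, and a local Redis at 127.0.0.1:6379 are assumptions):
import redis.clients.jedis.Jedis;

public class RedisIdDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Initialise the counter only if it does not exist yet;
            // in the clustered scheme each node's counter would start from a different offset.
            jedis.setnx("order:id", "0");
            // Step of 3, matching the initial value + step scheme above
            long id = jedis.incrBy("order:id", 3);
            System.out.println("next id = " + id);
        }
    }
}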
Advantages:
- Memory based, fast
- Naturally increasing, which helps sorting and searching
Disadvantages:
- Once the step size is fixed, adding machines later is again hard to adjust
- Persistence and availability need attention, which adds complexity to the system
A note on Redis persistence: with RDB, a snapshot is taken every so often, so some increments may not have reached disk when the node crashes, and duplicate IDs can appear after a restart; similarly, with master-slave delay, if the master dies and a slave takes over, duplicate IDs are also possible. With AOF persisting every command, writes slow down; with AOF persisting once per second, at most one second of data can be lost, and recovery after a restart is slower.
Zookeeper generates a unique ID
Zookeeper can also generate unique IDs, but it is rarely used for this because performance is poor. Each znode has a data version that can be used to produce a 32-bit or 64-bit sequence number, and that number is unique; but under heavy contention a distributed lock is also needed, which is not worth the cost and is inefficient.
Meituan Leaf
The following is taken from Meituan's official documentation: tech.meituan.com/2019/03/07/…
Leaf was designed with the following requirements:
- Globally unique, the ID will never be duplicated, and the overall ID trend increases.
- Highly available, the service is completely based on distributed architecture, even if MySQL is down, it can tolerate a period of database unavailability.
- High concurrency and low latency: on a CentOS 4C8G VM, remote calls reach 50,000+ QPS with TP99 within 1 ms.
- Access is simple and can be directly accessed through the corporate RPC service or HTTP call.
It is very clear in the document that there are two versions:
- V1: Leaf-segment, i.e. pre-distributed IDs, which is the number-segment scheme described above; the table design is similar, and IDs are pulled from the database in batches
The drawback is that refreshing a segment blocks on the database at an unpredictable moment, and the service is unavailable if the database is down or during a master-slave switchover.
Optimizations:
1. Double Buffer: the segment update is made asynchronous. Two buffer segments are kept in memory; once 10% of the current segment has been consumed, an asynchronous thread starts allocating the next segment in advance (see the sketch after this list).
2. In the scheme above the segment size is fixed and may turn out too large or too small, so it can be made dynamic: the size of the next segment is adjusted according to traffic.
- V2: Leaf-snowflake. Leaf provides a Java implementation and keeps only a weak dependency on Zookeeper for generating machine numbers: even if Zookeeper has problems, the service is not affected. After first retrieving its workerID from Zookeeper, Leaf caches it in a workerID file on the local file system, so even if Zookeeper fails and the machine restarts at the same time, the service still runs normally. This reduces the dependency on third-party components and improves the SLA to some extent.
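Going back to the double-Buffer optimization in point 1 above, here is a minimal sketch of the idea, not Meituan's actual code: SegmentDao is a hypothetical interface standing in for the number-segment fetch shown earlier, and the 10% threshold follows the description above.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class DoubleBufferIdGenerator {

    // One pre-fetched range [start, max), handed out from memory.
    static class Segment {
        final long start;
        final long max;
        final AtomicLong cursor;

        Segment(long start, long max) {
            this.start = start;
            this.max = max;
            this.cursor = new AtomicLong(start);
        }

        boolean tenPercentConsumed() {
            return cursor.get() - start >= (max - start) / 10;
        }
    }

    // Hypothetical DAO wrapping the optimistic-lock segment fetch shown earlier.
    interface SegmentDao {
        Segment nextSegment();
    }

    private final SegmentDao dao;
    private final ExecutorService loader = Executors.newSingleThreadExecutor();
    private final AtomicBoolean loading = new AtomicBoolean(false);
    private volatile Segment current;
    private volatile Segment next; // the second buffer, filled in the background

    public DoubleBufferIdGenerator(SegmentDao dao) {
        this.dao = dao;
        this.current = dao.nextSegment();
    }

    public synchronized long nextId() {
        // Once 10% of the current segment is consumed, pre-load the next one asynchronously.
        if (next == null && current.tenPercentConsumed() && loading.compareAndSet(false, true)) {
            loader.submit(() -> {
                try {
                    next = dao.nextSegment();
                } finally {
                    loading.set(false);
                }
            });
        }
        long id = current.cursor.getAndIncrement();
        if (id < current.max) {
            return id;
        }
        // Current segment exhausted: switch to the pre-loaded buffer, or fetch synchronously as a fallback.
        current = (next != null) ? next : dao.nextSegment();
        next = null;
        return current.cursor.getAndIncrement();
    }
}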
Snowflake algorithm
Snowflake is a popular ID generation algorithm that Twitter used for its internal distributed projects. It generates a Long ID, 8 bytes, i.e. 64 bits, laid out from left to right as follows:
- 1 bit: unused. In binary the highest bit set to 1 means a negative number, but the IDs we generate must be positive integers, so this bit is fixed to 0.
- 41 bits: the timestamp in milliseconds. 41 bits can cover (2^41 - 1) / (1000 * 60 * 60 * 24 * 365) ≈ 69 years.
- 10 bits: the ID of the worker machine, which can be a plain machine ID or a machine-room ID plus a machine ID
- 12 bits: a sequence number that distinguishes IDs generated within the same millisecond on the same machine in the same machine room
Every machine then generates IDs following this layout. The result is trend-increasing, because time keeps increasing, and no distributed coordination is needed, which is much simpler.
As you can see, Snowflake depends strongly on time: since time theoretically always moves forward, the timestamp bits keep growing. The problem is clock rollback, i.e. the clock suddenly jumping backwards, perhaps because of a fault, or because of how the time is obtained after a restart. How can clock rollback be handled?
- Option 1: when fetching the time, if it is smaller than the last timestamp, do not allocate an ID; keep looping until the time catches up.
- Option 2: the scheme above only suits small rollbacks. If the gap is large, blocking and waiting is clearly undesirable, so either reject requests with an error once the rollback exceeds a threshold, or reserve a few extension bits and add 1 to them after each rollback, so the IDs remain unique.
Java code implementation:
public class SnowFlake {
// Id of the data center
private long datacenterId;
// Machine ID
private long workerId;
// Same time sequence
private long sequence;
public SnowFlake(long workerId, long datacenterId) {
this(workerId, datacenterId, 0);
}
public SnowFlake(long workerId, long datacenterId, long sequence) {
// Make a valid judgment
if (workerId > maxWorkerId || workerId < 0) {
throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", maxWorkerId));
}
if (datacenterId > maxDatacenterId || datacenterId < 0) {
throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", maxDatacenterId));
}
System.out.printf("worker starting. timestamp left shift %d, datacenter id bits %d, worker id bits %d, sequence bits %d, workerid %d",
timestampLeftShift, datacenterIdBits, workerIdBits, sequenceBits, workerId);
this.workerId = workerId;
this.datacenterId = datacenterId;
this.sequence = sequence;
}
// Start time stamp
private long twepoch = 1420041600000L;
// The machine room ID contains 5 bits. The maximum number is 11111(base 2)--> 31(base 10).
private long datacenterIdBits = 5L;
// The machine ID takes up 5 bits :11111(base 2)--> 31(base 10)
private long workerIdBits = 5L;
// With 5 bits the maximum value is 31, i.e. up to 32 worker machine IDs (0-31)
private long maxWorkerId = -1L ^ (-1L << workerIdBits);
// With 5 bits the maximum value is 31, i.e. up to 32 machine-room IDs (0-31)
private long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);
// The sequence number occupies 12 bits: 111111111111 = 4095
private long sequenceBits = 12L;
// Offset of workerId
private long workerIdShift = sequenceBits;
// The offset of datacenterId
private long datacenterIdShift = sequenceBits + workerIdBits;
// timestampLeft offset
private long timestampLeftShift = sequenceBits + workerIdBits + datacenterIdBits;
// Serial number mask 4095 (0b111111111111= 0xFFF =4095)
// Used to AND with the sequence so it stays within 0-4095
private long sequenceMask = -1L ^ (-1L << sequenceBits);
// Last timestamp
private long lastTimestamp = -1L;
// Get the machine ID
public long getWorkerId() {
return workerId;
}
// Obtain the equipment room ID
public long getDatacenterId() {
return datacenterId;
}
// Get the timestamp of the last fetch
public long getLastTimestamp() {
return lastTimestamp;
}
// Get the next random ID
public synchronized long nextId() {
// Get the current timestamp in milliseconds
long timestamp = timeGen();
if (timestamp < lastTimestamp) {
System.err.printf("clock is moving backwards. Rejecting requests until %d.", lastTimestamp);
throw new RuntimeException(String.format("Clock moved backwards. Refusing to generate id for %d milliseconds",
lastTimestamp - timestamp));
}
// Same millisecond as the last ID: increment the sequence
if (lastTimestamp == timestamp) {
sequence = (sequence + 1) & sequenceMask;
// The sequence overflowed past 4095 within this millisecond
if (sequence == 0) {
// Spin until the next millisecond arrives
timestamp = tilNextMillis(lastTimestamp);
}
} else {
// If this is the first fetch of the current time, then set to 0
sequence = 0;
}
// Record the last timestamp
lastTimestamp = timestamp;
// Offset calculation
return ((timestamp - twepoch) << timestampLeftShift) |
(datacenterId << datacenterIdShift) |
(workerId << workerIdShift) |
sequence;
}
private long tilNextMillis(long lastTimestamp) {
// Get the latest timestamp
long timestamp = timeGen();
// Spin while the clock has not yet advanced past the timestamp whose sequence overflowed
while (timestamp <= lastTimestamp) {
// If not, continue
timestamp = timeGen();
}
return timestamp;
}
private long timeGen() {
return System.currentTimeMillis();
}
public static void main(String[] args) {
SnowFlake worker = new SnowFlake(1, 1);
long timer = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
worker.nextId();
}
System.out.println(System.currentTimeMillis());
System.out.println(System.currentTimeMillis() - timer);
}
}
Baidu uid-generator
Along the same lines, Baidu developed its own generator based on the Snowflake algorithm. The difference is that the number of bits in each part can be customized, and many optimizations and extensions have been added: github.com/baidu/uid-g…
UidGenerator is a unique ID generator implemented in Java based on the Snowflake algorithm. It works as a component inside an application project and supports custom workerId bit widths and initialization strategies, which makes it suitable for scenarios such as instances being automatically restarted or drifting in docker virtualization environments. In terms of implementation, UidGenerator works around the natural concurrency limit on the sequence by borrowing from future time; it uses a RingBuffer to cache generated UIDs, parallelizes UID production and consumption, and pads cache lines to avoid the hardware-level false-sharing problem the RingBuffer would otherwise cause. The final single-machine QPS can reach 6 million.
Qin Huai's view
Whatever kind of UID generator you pick, guaranteeing uniqueness is the core; only on that basis are performance and high availability worth discussing. Overall the schemes fall into two camps:
- Centralized: rely on a third-party center, such as MySQL, Redis, or Zookeeper
  - Advantages: trend-increasing IDs
  - Disadvantages: extra complexity; usually requires a cluster and pre-agreed step sizes
- Decentralized: generate directly on the local machine, e.g. Snowflake or UUID
  - Advantages: simple, efficient, no performance bottleneck
  - Disadvantages: long IDs, weak increment property
There is no perfect solution, only the solution that fits the business and the current scale; in technical design there is no absolute optimum.
[About the author]: Qin Huai, author of the public account [Qin Huai Grocery Store]. The road of technology cannot be covered all at once; the mountains are high and the rivers long, and even if I move slowly, I never stop. Writing focus: Java source-code analysis, JDBC, MyBatis, Spring, Redis, distributed systems, Sword Offer, LeetCode, and so on. Every article is written with care; I dislike clickbait titles and flashy filler, and I mostly write series. I cannot guarantee that everything I write is completely correct, but I do guarantee that it has been verified through practice or research. Corrections for any omissions or mistakes are welcome.