A distributed ID generator must meet the following requirements:

  • Globally unique: the most basic requirement; an ID must never be issued twice anywhere in the system.
  • High performance: low latency; generating an ID must not slow down the overall service response.
  • High availability: as close to 100% availability as possible.
  • Easy to integrate: keep the system design and implementation as simple as possible.
  • Trending upward: this depends on the business scenario; IDs should ideally trend upward, though strict monotonicity is usually not required.

So what are the common solutions for generating distributed IDs? Let's go through them.

1. Database auto-increment ID

This is the most common approach: use the database's auto_increment mechanism. Whenever we need an ID, we insert a row into a table and take the returned primary key. It is simple and requires little code, but the database itself becomes the bottleneck; a single DB instance cannot withstand high-concurrency scenarios.

To address this single-point performance problem, we can optimize for high availability: design a master/slave cluster with multiple masters, and give each master a different starting offset and a shared increment step.

-- MySQL_1 configuration:
set @@auto_increment_offset = 1;     -- starting value
set @@auto_increment_increment = 2;  -- step
-- The generated ids are 1, 3, 5, 7, 9...

-- MySQL_2 configuration:
set @@auto_increment_offset = 2;     -- starting value
set @@auto_increment_increment = 2;  -- step
-- The generated ids are 2, 4, 6, 8, 10...

However, as the business keeps growing and performance hits the bottleneck again, scaling out is painful: the step must be reconfigured and existing instances may have to be taken offline, which hinders subsequent expansion.

2. UUID

UUID is short for Universally Unique Identifier, a machine-generated identifier that is unique within a certain scope (from a specific namespace up to globally). A UUID is a 128-bit (16-byte) number, usually represented as a 36-character string, such as: 4d2803e0-8f29-17a3-9fe8-2c4309b1c250.

Generation performance is excellent, with virtually no performance problems, and the code is trivial. But UUIDs are long and unreadable, and there is no guarantee of a rising trend.
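As a quick illustration, generating one in Java is a single call to the JDK's built-in java.util.UUID (which produces version-4, i.e. random, UUIDs):

```java
import java.util.UUID;

public class UuidDemo {
    public static void main(String[] args) {
        // A version-4 (random) UUID: 36 characters, including 4 hyphens
        String id = UUID.randomUUID().toString();
        System.out.println(id);
        // Note: no ordering guarantee; two consecutive calls are unrelated values
        System.out.println(UUID.randomUUID().toString());
    }
}
```

This shows both strengths and weaknesses at once: zero infrastructure, but the values are opaque and non-sequential.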

3. Snowflake algorithm

Snowflake is the ID generation algorithm used internally by Twitter's distributed projects. After it was open-sourced, it was widely praised and adopted by large companies, and under its influence many companies have since developed distributed ID generators with their own characteristics.

Structure: sign bit (1 bit) + timestamp (41 bits) + machine ID (10 bits) + sequence (12 bits), 64 bits in total, i.e. one long.

  • Sign bit (1 bit): in Java, the highest bit of a long is the sign bit (0 for positive, 1 for negative). Generated IDs should be positive, so this bit is always 0.
  • Timestamp (41 bits): millisecond precision. Rather than storing the current timestamp directly, store the difference (current timestamp – a fixed start timestamp), which lets the generated IDs start from a smaller value. 41 bits of milliseconds last about 69 years: (1L << 41) / (1000L * 60 * 60 * 24 * 365) ≈ 69.
  • Worker machine ID (10 bits): also known as the workId, this can be configured flexibly, for example as a combination of machine room and machine number; it is commonly split into machine ID (5 bits) and data center ID (5 bits).
  • Sequence (12 bits): an auto-increment within the same millisecond, so one node can generate 4096 IDs per millisecond.

The Snowflake algorithm has no database dependency, is flexible and convenient, and performs better than a database. On a single machine the IDs increase with time, but in a distributed environment the clocks on different machines are never perfectly synchronized, so the sequence may occasionally fail to be globally increasing.
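To make the bit layout concrete, here is a minimal single-node sketch in Java. It is illustrative only, not Twitter's original code: the epoch constant and the clock-backwards handling are my own assumptions.

```java
public class Snowflake {
    private static final long EPOCH = 1609459200000L;       // assumed fixed start: 2021-01-01 UTC
    private static final long WORKER_BITS = 10L;
    private static final long SEQ_BITS = 12L;
    private static final long MAX_WORKER = (1L << WORKER_BITS) - 1; // 1023
    private static final long SEQ_MASK = (1L << SEQ_BITS) - 1;      // 4095

    private final long workerId;
    private long lastTimestamp = -1L;
    private long sequence = 0L;

    public Snowflake(long workerId) {
        if (workerId < 0 || workerId > MAX_WORKER) {
            throw new IllegalArgumentException("workerId must be in [0, " + MAX_WORKER + "]");
        }
        this.workerId = workerId;
    }

    public synchronized long nextId() {
        long ts = System.currentTimeMillis();
        if (ts < lastTimestamp) {
            // Clock moved backwards: refuse to issue IDs rather than risk duplicates
            throw new IllegalStateException("clock moved backwards");
        }
        if (ts == lastTimestamp) {
            sequence = (sequence + 1) & SEQ_MASK;
            if (sequence == 0) {
                // 4096 IDs exhausted in this millisecond: spin until the next one
                while ((ts = System.currentTimeMillis()) <= lastTimestamp) { }
            }
        } else {
            sequence = 0L;
        }
        lastTimestamp = ts;
        // sign(1, implicit 0) | timestamp delta(41) | workerId(10) | sequence(12)
        return ((ts - EPOCH) << (WORKER_BITS + SEQ_BITS)) | (workerId << SEQ_BITS) | sequence;
    }
}
```

Because the timestamp occupies the high bits, IDs from the same node are strictly increasing; across nodes, ordering holds only as far as the clocks agree, which is exactly the caveat above.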

The Snowflake algorithm looks pretty good, so our hero decided to give this scheme a try.

One furious burst of coding later, a demo was on its way to the boss.

Meanwhile, let's keep looking at the remaining options.

4. Baidu (UID-Generator)

UID-Generator is based on the Snowflake algorithm. Unlike the original Snowflake, UID-Generator lets you customize the number of bits used for the timestamp, worker machine ID, and sequence. In addition, UID-Generator uses a custom workId generation strategy: the workId is assigned by the database when the application starts.

No more details here; official address: github.com/baidu/uid-g…

In other words, it depends on a database and, being Snowflake-based, produces unreadable IDs.

5. Meituan (Leaf)

Meituan's Leaf is very comprehensive, supporting both segment mode and Snowflake mode.

Again, not much introduction needed; official address: github.com/Meituan-Dia…

Segment mode depends on a database, while Snowflake mode depends on ZooKeeper.

6. Didi (TinyID)

TinyID is based on the database segment algorithm and provides both HTTP and SDK access.

The documentation is complete; official address: github.com/didi/tinyid

7. Redis mode

The principle is to use the Redis INCR command, whose increment is atomic. As we all know, Redis performance is excellent, and because commands execute on a single thread there are no thread-safety issues. However, when using Redis as a distributed ID solution you must consider persistence; otherwise duplicate IDs may appear after a Redis restart. The RDB + AOF persistence mode is therefore recommended.

Based on this analysis, I think the Redis approach fits the current scenario very well. The company's system already uses Redis with RDB + AOF persistence, so it is very easy to adopt: only a small amount of code is needed to implement an ID generator.

Enough talk, let's get to work.

This example is based on Spring Boot 2.5.3.

First, introduce the Redis dependency in the POM:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- The Lettuce connection pool requires this dependency -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
</dependency>

Configure the Redis connection in application.yml:

spring:
  redis:
    port: 6379
    host: 127.0.0.1
    timeout: 5000
    lettuce:
      pool:
        # Maximum number of connections in the pool (a negative value means no limit)
        max-active: 8
        # Maximum number of idle connections in the pool
        max-idle: 8
        # Minimum number of idle connections in the pool
        min-idle: 0
        # Maximum blocking wait time of the pool (a negative value means no limit)
        max-wait: 1000
        # Shutdown timeout
        shutdown-timeout: 100

Inject a RedisTemplate into the Spring container:

@Configuration
public class RedisConfig {

    @Bean
    @ConditionalOnMissingBean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(connectionFactory);

        // Use Jackson2JsonRedisSerializer to serialize/deserialize redis values
        Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        // Deprecated on newer Jackson versions; activateDefaultTyping is the replacement
        objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(objectMapper);
        redisTemplate.setValueSerializer(jackson2JsonRedisSerializer);
        redisTemplate.setHashValueSerializer(jackson2JsonRedisSerializer);

        // Use StringRedisSerializer to serialize/deserialize redis keys
        RedisSerializer<String> redisSerializer = new StringRedisSerializer();
        redisTemplate.setKeySerializer(redisSerializer);
        redisTemplate.setHashKeySerializer(redisSerializer);

        redisTemplate.afterPropertiesSet();
        return redisTemplate;
    }
}

The Redis auto-increment sequence is implemented with the RedisAtomicLong class from the Spring Data Redis dependency, which, as the name suggests, is atomic.

Let's take a look at part of the RedisAtomicLong source:

// RedisAtomicLong partial source code
public class RedisAtomicLong extends Number implements Serializable, BoundKeyOperations<String> {

    private static final long serialVersionUID = 1L;
    // The key in redis, declared volatile
    private volatile String key;
    // Value operations used to read/write the value behind the key
    private ValueOperations<String, Long> operations;
    // The redisTemplate, held through its top-level RedisOperations interface
    private RedisOperations<String, Long> generalOps;

    public RedisAtomicLong(String redisCounter, RedisConnectionFactory factory) {
        this(redisCounter, (RedisConnectionFactory) factory, (Long) null);
    }

    private RedisAtomicLong(String redisCounter, RedisConnectionFactory factory, Long initialValue) {
        Assert.hasText(redisCounter, "a valid counter name is required");
        Assert.notNull(factory, "a valid factory is required");
        // Initialize a RedisTemplate object
        RedisTemplate<String, Long> redisTemplate = new RedisTemplate();
        redisTemplate.setKeySerializer(new StringRedisSerializer());
        redisTemplate.setValueSerializer(new GenericToStringSerializer(Long.class));
        redisTemplate.setExposeConnection(true);
        // Set the Redis connection factory
        redisTemplate.setConnectionFactory(factory);
        redisTemplate.afterPropertiesSet();
        // Store the key passed in
        this.key = redisCounter;
        // Store the redisTemplate
        this.generalOps = redisTemplate;
        // Obtain the value operations for the key
        this.operations = this.generalOps.opsForValue();
        // Initialize the value: if initialValue is null and the key has no value yet, default it to 0
        if (initialValue == null) {
            if (this.operations.get(redisCounter) == null) {
                this.set(0L);
            }
        // Otherwise, set it to the value passed in
        } else {
            this.set(initialValue);
        }
    }

    // Increment the value of the key by 1 and return it
    public long incrementAndGet() {
        return this.operations.increment(this.key, 1L);
    }
}

Having looked at the source, let's continue with our own code.

Use RedisAtomicLong to build a basic Redis auto-increment utility class:

// Only some methods are encapsulated here; extend as needed
@Service
public class RedisService {

    @Autowired
    RedisTemplate<String, Object> redisTemplate;

    /**
     * Get the connection factory
     */
    public RedisConnectionFactory getConnectionFactory() {
        return redisTemplate.getConnectionFactory();
    }

    /**
     * Auto-increment
     * @param key
     * @return
     */
    public long increment(String key) {
        RedisAtomicLong redisAtomicLong = new RedisAtomicLong(key, getConnectionFactory());
        return redisAtomicLong.incrementAndGet();
    }

    /**
     * Auto-increment (with expiration time)
     * @param key
     * @param time
     * @param timeUnit
     * @return
     */
    public long increment(String key, long time, TimeUnit timeUnit) {
        RedisAtomicLong redisAtomicLong = new RedisAtomicLong(key, getConnectionFactory());
        redisAtomicLong.expire(time, timeUnit);
        return redisAtomicLong.incrementAndGet();
    }

    /**
     * Auto-increment (with expiration instant)
     * @param key
     * @param expireAt
     * @return
     */
    public long increment(String key, Instant expireAt) {
        RedisAtomicLong redisAtomicLong = new RedisAtomicLong(key, getConnectionFactory());
        redisAtomicLong.expireAt(expireAt);
        return redisAtomicLong.incrementAndGet();
    }

    /**
     * Auto-increment (with expiration time and step)
     * @param key
     * @param increment
     * @param time
     * @param timeUnit
     * @return
     */
    public long increment(String key, int increment, long time, TimeUnit timeUnit) {
        RedisAtomicLong redisAtomicLong = new RedisAtomicLong(key, getConnectionFactory());
        redisAtomicLong.expire(time, timeUnit);
        // addAndGet applies the custom step instead of incrementing by 1
        return redisAtomicLong.addAndGet(increment);
    }
}

Write the ID generator methods based on the business requirements:

@Service
public class IdGeneratorService {

    @Autowired
    RedisService redisService;

    /**
     * Generate an ID (the increment sequence resets daily)
     * Format: date + 6-digit increment, e.g. 20210804000001
     * @param key
     * @param length
     * @return
     */
    public String generateId(String key, Integer length) {
        long num = redisService.increment(key, getEndTime());
        String id = LocalDate.now().format(DateTimeFormatter.ofPattern("yyyyMMdd")) + String.format("%0" + length + "d", num);
        return id;
    }

    /**
     * Get the end time of the current day
     */
    public Instant getEndTime() {
        LocalDateTime endTime = LocalDateTime.of(LocalDate.now(), LocalTime.MAX);
        return endTime.toInstant(ZoneOffset.ofHours(8));
    }
}

The business requires the increment sequence to reset every day, so the key's expiration time is set to the end of each day; the next day, the sequence starts again from 1.

Test it:

@SpringBootTest
class IdGeneratorServiceTest {

    @Autowired
    IdGeneratorService idGeneratorService;

    @Test
    void generateIdTest() {
        String code = idGeneratorService.generateId("orderId", 6);
        System.out.println(code);
    }
}
// Output: 20210804000001

A six-digit increment sequence can generate almost a million (999,999) codes per day, which is sufficient for most companies.

In a local test with 10 threads, each sending 10,000 requests within one second, it handled the load without pressure.

If you worry that sequential numbers might leak company data in some scenarios, such as your order volume, you can use a random growth step so the volume cannot be inferred from the IDs. This does reduce the number of codes that can be generated per day, so adjust the number of digits in the sequence accordingly.
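The random-step idea can be sketched as follows (a minimal sketch using an in-process AtomicLong as a stand-in for the Redis counter; with Redis you would pass the random step as the delta of an increment-by command such as INCRBY):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public class RandomStepCounter {
    // Stand-in for the Redis counter; with Redis, the step would go to INCRBY
    private final AtomicLong counter = new AtomicLong();

    public long next() {
        // Random step between 1 and 10: values still trend upward,
        // but the gaps hide the exact count of issued IDs
        long step = ThreadLocalRandom.current().nextLong(1, 11);
        return counter.addAndGet(step);
    }
}
```

With an average step of 5.5, a six-digit sequence yields roughly 180,000 IDs per day instead of 999,999, which is the capacity trade-off mentioned above.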

Conclusion

There is no best solution, only the most suitable one. This is often the case in real work: the most appropriate scheme must be chosen according to the actual business requirements.
