Seeing this question, I remember that back when I played World of Warcraft, a 25-player Heroic raid boss already had more than 2.1 billion health, so the raid bosses at that time were designed around phase transitions and health resets, because health in Warcraft was an int and could not exceed the 2^31-1 limit.

Presumably Blizzard's designers did not expect several expansions later to run into the int limit, which is probably why the stat squish (attribute compression) came afterwards.

These are all digressions, just to show that it is possible to hit the upper limit when the amount of data is large. Back to the MySQL auto-increment ID upper-limit problem, there are two cases.

1. With a primary key

If a primary key is set, it is usually defined as auto-increment.

We know that the int type in MySQL is 4 bytes: the signed range is [-2^31, 2^31-1], and the unsigned range is [0, 2^32-1].
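Purely as an illustration (a hypothetical table, not part of the original example), the signed/unsigned choice is just a column modifier:

-- signed int: [-2147483648, 2147483647]; unsigned int: [0, 4294967295]
CREATE TABLE `range_demo` (
  `signed_id` int NOT NULL,
  `unsigned_id` int unsigned NOT NULL
);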

Create a table:

CREATE TABLE `test1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(32) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2147483647 DEFAULT CHARSET=utf8mb4;

Then insert a row:

insert into test1(name) values('qq');

Now the table contains one row whose id has reached the signed int maximum, 2147483647.
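To confirm, you can query the table; assuming only the insert above has run, it returns the single row sitting at the maximum id:

select * from test1;
-- expected: id = 2147483647, name = 'qq'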



If the insert statement is executed again:

insert into test1(name) values('ww');

1062 - Duplicate entry '2147483647' for key 'PRIMARY', Time: 0.000000s

That is, if a primary key is set and it auto-increments, a duplicate-key error is reported once the auto-increment value has reached its maximum and another insert is attempted.

So for the MySQL primary key, use bigint (8 bytes).

Be clear about the upper limit at design time; 2.1 billion rows is actually reachable if the business inserts frequently.
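As a hedged sketch (assuming the test1 table from above), the existing int primary key can be widened to bigint, whose unsigned upper limit is 2^64-1:

ALTER TABLE test1 MODIFY `id` bigint unsigned NOT NULL AUTO_INCREMENT;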

2. No primary key

If no primary key is set, InnoDB automatically creates a hidden 6-byte row_id for you. row_id is unsigned, so its maximum value is 2^48-1. Note that this row_id counter (dict_sys->row_id) is global, shared by all InnoDB tables that lack a primary key.

Also create a table as a test:

CREATE TABLE `test2` (
  `name` varchar(32) NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Use ps -ef | grep mysql to find the mysql process ID, then run the following command to change row_id to 1 through gdb:

sudo gdb -p 2584 -ex 'p dict_sys->row_id=1' -batch

insert into test2(name) values('1');
insert into test2(name) values('2');
insert into test2(name) values('3');

Change row_id again to 2^48, which is 281474976710656

sudo gdb -p 2584 -ex 'p dict_sys->row_id=281474976710656' -batch

Then insert data again:

insert into test2(name) values('4');
insert into test2(name) values('5');
insert into test2(name) values('6');

Querying the table then returns four rows: 4, 5, 6, and 3.
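For example (only name is visible, since row_id is a hidden column):

select * from test2;
-- returns the rows '4', '5', '6', '3'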

Since we started from row_id=1, the rows '1', '2', and '3' got row_id 1, 2, and 3.

After row_id was changed to the upper limit, it started again from 0, so the rows '4', '5', and '6' got row_id 0, 1, and 2.

Since rows with row_id 1 and 2 (the '1' and '2' rows) already existed, that data was overwritten.

To summarize, there are two cases when the auto-increment ID reaches its upper limit:

If a primary key is set, a primary-key conflict error is reported. If no primary key is set, the database automatically generates a global row_id for us, and new data overwrites old data.

Solution:

If possible, set a primary key and use bigint; the 2.1-billion upper limit of int is actually reachable, as the Warcraft example shows. Without a primary key, row_id can go as high as roughly 281 trillion, but new data silently overwriting old data is obviously unacceptable.
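As a hedged extra, one way to keep an eye on how close existing tables are to the limit is to check information_schema (the schema name below is a placeholder):

SELECT table_name, auto_increment
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY auto_increment DESC;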