When I saw this problem, I was reminded of my days playing World of Warcraft: boss health on 25-player Heroic difficulty exceeded 2.1 billion, so raid bosses back then were designed with phase transitions, because health in Warcraft is stored as an int and cannot exceed the 32-bit limit.

Blizzard's designers presumably never anticipated that, several expansions in, health values would run up against the int limit, and everyone guessed this was why the later stat squish (attribute compression) happened.

This aside is just to show that hitting the upper limit of a MySQL auto-increment ID is entirely possible.

1. With a primary key

If a primary key is defined, it is usually set to auto-increment.

An int is 4 bytes: the signed maximum is 2^31 - 1 = 2147483647, and the unsigned maximum is 2^32 - 1 = 4294967295.
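If you want to sanity-check these bounds inside MySQL itself, a quick SELECT of the literals does it (no table needed, column aliases are just for readability):

SELECT 2147483647 AS signed_int_max,    -- 2^31 - 1
       4294967295 AS unsigned_int_max;  -- 2^32 - 1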

Try creating a table:

CREATE TABLE `test1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(32) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2147483647 DEFAULT CHARSET=utf8mb4;

Then run an insert:

insert into test1(name) values('qq');

Now the table holds one row whose id is already the signed int maximum, 2147483647.

If the insert statement is executed again:

insert into test1(name) values('ww');

1062 - Duplicate entry '2147483647' for key 'PRIMARY', Time: 0.000000s

In other words, once an auto-increment primary key reaches its upper limit, subsequent inserts fail with a duplicate-key error; the counter does not wrap around or reuse values.
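Before that happens, you can keep an eye on how close each table's counter is to the ceiling. A sketch using information_schema (replace 'test' with your actual schema name):

SELECT table_name, auto_increment
FROM information_schema.tables
WHERE table_schema = 'test'
  AND auto_increment IS NOT NULL
ORDER BY auto_increment DESC;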

A bigint primary key, by contrast, is 8 bytes, with a signed maximum of 2^63 - 1 (about 9.2 x 10^18), which is effectively unreachable.
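If a table is already approaching the int ceiling, one escape hatch is widening the column. A minimal sketch against the test1 table above (MODIFY rebuilds the table, so on a large production table this should go through an online-schema-change tool):

-- After this, inserts continue from 2147483648 instead of failing.
ALTER TABLE test1 MODIFY `id` bigint NOT NULL AUTO_INCREMENT;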

When designing a table, consider the upper limit of the chosen type: for a business with frequent inserts, 2.1 billion rows is genuinely reachable.

2. Without a primary key

If no primary key is defined, InnoDB automatically generates a hidden 6-byte row_id for each row. Since row_id is unsigned, its maximum value is 2^48 - 1.

Also create a table as a test:

CREATE TABLE `test2` (
  `name` varchar(32) NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Find the mysql process ID with ps -ef | grep mysql, then attach gdb and set the global row_id to 1:

sudo gdb -p 2584 -ex 'p dict_sys->row_id=1' -batch

Then insert several pieces of data:

insert into test2(name) values('1');
insert into test2(name) values('2');
insert into test2(name) values('3');

Now change row_id to its upper bound, 2^48 = 281474976710656:

sudo gdb -p 2584 -ex 'p dict_sys->row_id=281474976710656' -batch

Then insert data again:

insert into test2(name) values('4');
insert into test2(name) values('5');
insert into test2(name) values('6');

Query the table now and you will find four rows: 4, 5, 6, and 3.
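For reference, this is what the check looks like; the expected output follows from the experiment above:

SELECT * FROM test2;
-- name
-- ----
-- 4
-- 5
-- 6
-- 3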

Since we started from row_id = 1, the rows '1', '2', and '3' received row_id 1, 2, and 3.

After row_id was pushed past the upper limit, the counter wrapped around to 0, so the rows '4', '5', and '6' received row_id 0, 1, and 2.

Because row_id 1 and 2 were already occupied by the rows '1' and '2', those rows were silently overwritten.

Conclusion

Once the auto-increment ID reaches its upper limit and runs out, there are two cases:

  1. If a primary key is set, further inserts fail with a primary-key conflict error.
  2. If no primary key is set, the database generates a global row_id for us, and once it wraps around, new data overwrites old data (see the query below for finding such tables).
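To find existing tables exposed to the second case, you can look for tables without a primary key. A sketch against information_schema (system schemas excluded):

SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND c.constraint_name IS NULL;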

Solution:

Set a primary key on every table whenever possible, and prefer bigint for it. An int's ceiling of 2.1 billion is genuinely reachable, as Warcraft showed; row_id's ceiling of roughly 281 trillion is far higher, but silently overwritten data is never acceptable.
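Putting that advice together, a recommended table skeleton might look like this (test3 is a hypothetical name; bigint unsigned pushes the ceiling to 2^64 - 1):

CREATE TABLE `test3` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(32) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;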