The problem

First of all, the fact that 0.1 + 0.2 !== 0.3 in JS is common knowledge; this is not unique to JS, since Java and other languages have similar floating-point accuracy problems.
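You can verify this in any JS console:

```js
// 0.1 and 0.2 have no exact binary representation, so their sum is slightly off.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
```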

However, our project needed to handle numbers longer than 16 digits, and we found that JS also loses precision on large integers.

For example, console.log(123456789012345678901) gives 123456789012345680000

This is quite counter-intuitive: you would expect an overflowing number to become Infinity or NaN, to throw an error, or to be clamped to some fixed maximum value, not to be silently rounded.
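A short sketch of the behaviour:

```js
// Large integers are silently rounded to the nearest representable double;
// nothing overflows, throws, or becomes NaN.
console.log(123456789012345678901); // 123456789012345680000
console.log(9007199254740993);      // 9007199254740992 (2^53 + 1 rounds down)
// Two different literals can even collapse into the same double:
console.log(123456789012345678901 === 123456789012345678902); // true
```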

Why

This is mainly because JS numbers follow the IEEE 754 double-precision format: 64 bits in total, made up of 1 sign bit, 11 exponent bits, and 52 bits for the significand.

As a result, only integers within [-(Math.pow(2, 53) - 1), Math.pow(2, 53) - 1] can be represented exactly by a JS number.

That is why JS (ES6+) also provides the maximum/minimum safe integers Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER, as well as the method Number.isSafeInteger() to check a value.
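For example:

```js
// The safe-integer boundary follows from the 52-bit significand (plus the implicit leading bit).
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991, i.e. 2^53 - 1
console.log(Number.MIN_SAFE_INTEGER);                // -9007199254740991
console.log(Number.isSafeInteger(9007199254740991)); // true
console.log(Number.isSafeInteger(9007199254740992)); // false
```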

Attention!

The bitwise operators | & ~ ^ and the shift operators << >> >>> work on 32-bit integers, so when using them make sure the operands do not exceed Math.pow(2, 31) - 1, i.e. 2147483647.
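For example:

```js
// Bitwise and shift operators convert their operands to 32-bit signed integers first.
console.log(2147483647 | 0);      // 2147483647  (still fits in 32 bits)
console.log(2147483648 | 0);      // -2147483648 (wraps around)
console.log(Math.pow(2, 53) | 0); // 0 (truncated to the low 32 bits)
```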

The value at which a number actually turns into Infinity is much higher: anything greater than or equal to Math.pow(2, 1024).
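For example:

```js
// Anything at or above 2^1024 overflows to Infinity; below that it is only rounded.
console.log(Number.MAX_VALUE);  // 1.7976931348623157e+308 (just under 2^1024)
console.log(Math.pow(2, 1024)); // Infinity
```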

The solution

First of all, the JS standard itself cannot be changed, so here are the common workarounds.

Purely numeric IDs / keys / order numbers, etc.

Use strings instead, and agree on this convention for values sent between the front end and the back end.
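A minimal sketch of this convention, assuming a hypothetical orderId field that both ends agree to treat as a string:

```js
// The back end serializes the id as a string, so JSON.parse never handles it as a Number.
const payload = '{"orderId": "123456789012345678901", "amount": 99}';
const order = JSON.parse(payload);
console.log(order.orderId);        // "123456789012345678901" - no precision loss
console.log(typeof order.orderId); // "string"
```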

Huge monetary amounts, e.g. the Zimbabwean dollar, where 10 trillion Zimbabwean dollars was worth about 2 US cents

It is recommended to use different base units depending on the region. If the amounts are large, the base unit can be 10,000 or 1,000, or the number can be split into several groups of digits.
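One possible sketch, assuming a hypothetical convention where the stored number is expressed in units of 10,000:

```js
// Hypothetical convention: keep the amount in a larger base unit so the stored
// Number stays well inside the safe-integer range.
const BASE_UNIT = 10000;                   // 1 stored unit = 10,000 currency units
const stored = 1234567890123;              // represents 12,345,678,901,230,000 currency units
console.log(Number.isSafeInteger(stored)); // true
// Convert back only for display, as a string, never as a full Number:
console.log(stored.toString() + "0000");   // "12345678901230000"
```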

ES10 introduced BigInt

BigInt is a new built-in object introduced in ES10 that can represent integers beyond Math.pow(2, 53) - 1; a BigInt literal is written by appending n to the number, e.g. 123456789012345678901n.
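For example:

```js
// BigInt literals use the n suffix; integer arithmetic stays exact at any size.
const big = 123456789012345678901n;
console.log(big + 1n);        // 123456789012345678902n
console.log(big * 10n);       // 1234567890123456789010n
console.log(typeof big);      // "bigint"
// BigInt and Number cannot be mixed directly:
// console.log(big + 1);      // TypeError
console.log(big + BigInt(1)); // 123456789012345678902n
```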