The phenomenon

```js
0.1 + 0.2 === 0.3 // false
```

Background knowledge

  • In strongly typed languages, integers and decimals are distinct types: integers are stored as integers, decimals as floating-point numbers
  • Floating-point numbers come in two common widths: single precision, which uses 32 bits, and double precision, which uses 64 bits
  • This article only touches on floating point briefly; if you are not familiar with it, the article < Floating point > is recommended

Exploring the cause

As shown above, 0.1 + 0.2 === 0.3 evaluates to false.

To explain this, we first have to explain how JS stores decimals. Everything happens for a reason, so let's work problem by problem:

  1. JS does not distinguish between number types (integer vs. decimal), so how are they stored after conversion to binary?

Fractions are demanding: converting them (multiply by R, take the integer part) may loop forever, as 0.1 does below. Integers convert easily (repeatedly take mod R) with no such problem. To keep decimal arithmetic well defined, a floating-point representation is required.
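The integer half really is the easy part. A quick sketch of the mod-R method with R = 2 (`integerToBinary` is a name made up for this illustration):

```js
// Convert a non-negative integer to binary by repeatedly taking "mod R"
// (R = 2) and dividing — every step is exact, and the loop always terminates.
function integerToBinary(n) {
  let bits = '';
  do {
    bits = (n % 2) + bits;  // peel off the lowest bit
    n = Math.floor(n / 2);
  } while (n > 0);
  return bits;
}

integerToBinary(10); // '1010'
```

The fractional half, by contrast, may never terminate, and 0.1 is exactly such a case, as the article shows next.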

  2. Which representation does JS choose?

Unlike many other programming languages, JavaScript does not define separate numeric data types; it always stores numbers as double-precision floating-point values, following the international IEEE 754 standard.

IEEE 754 standard double-precision floating-point numbers

An IEEE 754 double consists of three fields: the sign bit, the exponent (stored with a bias), and the fraction. Of the 64 bits, the sign takes 1, the exponent 11, and the fraction 52.

(Field layout: 1 sign bit | 11 exponent bits | 52 fraction bits)

Ok, so let's play computer and calculate 0.1 + 0.2 by hand. The key is converting each decimal into IEEE 754 form; the addition from there is easy.

The conversion, like putting an elephant in a refrigerator, takes three steps:

  1. Convert 0.1 to a binary representation
  2. Write that binary in scientific notation
  3. Convert the scientific-notation form to the IEEE 754 standard representation

Convert 0.1 to a binary representation

We all know that decimal fractions convert to binary by the multiply-by-R, take-the-integer-part method, which runs as follows (if base conversion or sign-magnitude/one's-complement/two's-complement codes are unfamiliar, the article < easy to understand the original complement inverse shift > is recommended):

Result: 0.00011001100110011… (0011 repeating)
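The multiply-by-2 procedure can be sketched in code. Note that `0.1` in JS is already the rounded double, so the bits we emit are the bits of the stored value; `fractionToBinary` and its `digits` cutoff are made up for this illustration:

```js
// "Multiply by R, take the integer part" for the fractional part (R = 2).
// Each step doubles x and peels off the bit that crosses the binary point.
function fractionToBinary(x, digits = 20) {
  let bits = '';
  while (x > 0 && bits.length < digits) {
    x *= 2;
    bits += Math.floor(x);  // the bit in front of the point
    x -= Math.floor(x);     // keep only the fractional part
  }
  return bits;
}

fractionToBinary(0.1); // '00011001100110011001' — the 0011 cycle never ends
```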

Write that binary in scientific notation

0.00011… (0011 repeating) written in scientific notation is 1.10011001… (1001 repeating) × 2^-4

Convert the scientific-notation form to the IEEE 754 standard representation

Now we normalize; in short, we work out the exponent field and the fraction:

  • Exponent bias

    The 11-bit exponent field of a double stores a fixed bias (2^(11-1) - 1 = 1023) plus the actual exponent (here -4, from 2^-4). Why 11? Because the exponent occupies 11 of the 64 bits.

    So the biased exponent of 0.1 is 1023 + (-4) = 1019, which in 11-bit binary is 011 1111 1011.
  • Fraction (mantissa)

    The fraction field holds 52 bits, so we keep 52 bits after the leading 1:

    1001…(eleven more 1001 groups)…1010 — note that the last four bits are 1010, not 1001, because rounding carried here, and this rounding is exactly why 0.1 + 0.2 does not equal 0.3.
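The bias arithmetic can be checked in one line (a sketch; `padStart` only zero-fills to the 11-bit field width):

```js
// Biased exponent of 0.1: fixed bias plus the actual exponent -4.
const bias = Math.pow(2, 11 - 1) - 1;  // 1023
const biasedExponent = (bias + (-4)).toString(2).padStart(11, '0');

biasedExponent; // '01111111011', i.e. 011 1111 1011
```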

At this point, 0.1 can finally be written in its IEEE 754 representation:

```
0      | 011 1111 1011     | 1001…(eleven more 1001 groups)…1010
(sign)   (biased exponent)   (fraction)
```
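We can also ask the engine for these exact bits instead of trusting the hand conversion. A sketch using a shared buffer to reinterpret the double's 64 bits (assumes an environment with BigInt support, e.g. any modern Node or browser):

```js
// Reinterpret the 64 bits of 0.1 as an unsigned integer, then slice the fields.
const buf = new ArrayBuffer(8);
new Float64Array(buf)[0] = 0.1;
const bits = new BigUint64Array(buf)[0].toString(2).padStart(64, '0');

const sign     = bits[0];           // '0'
const exponent = bits.slice(1, 12); // '01111111011' (1019)
const fraction = bits.slice(12);    // 52 bits, ending in …1010
```

Both typed arrays view the same buffer in the platform's byte order, so the bit string always matches the double's actual encoding.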

Here comes the error

If we now convert this number back to decimal, we find its value is no longer 0.1 but 0.1000000000000000055511151231257827…
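You don't even need the bit-level detour to see the drift: `toPrecision` prints the stored value to more digits than the default formatting shows.

```js
// The defaults hide the error; 21 significant digits reveal the stored values.
(0.1).toPrecision(21); // '0.100000000000000005551'
(0.2).toPrecision(21); // '0.200000000000000011102'
(0.3).toPrecision(21); // '0.299999999999999988898'
```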

In the same way, 0.2 and 0.3 carry their own errors, so naturally the sum does not compare equal.
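This is also why the usual advice is to compare floats with a tolerance rather than `===`. A minimal sketch (`nearlyEqual` is a made-up helper; real code may want a relative tolerance for large magnitudes):

```js
// Treat two doubles as equal when they differ by less than a tiny epsilon.
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps;
}

0.1 + 0.2 === 0.3;           // false
nearlyEqual(0.1 + 0.2, 0.3); // true
```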

Strange equality

In JavaScript, Number has two relevant constants: MAX_VALUE and MAX_SAFE_INTEGER.

MAX_VALUE is the largest value JavaScript can represent; MAX_SAFE_INTEGER is the largest safe integer JavaScript can represent. Their values are as follows:

```js
Number.MAX_VALUE        // 1.7976931348623157e+308
Number.MAX_SAFE_INTEGER // 9007199254740991
```

```js
const a = Number.MAX_SAFE_INTEGER
a + 1 === a + 2 // true
```

Background knowledge

Number.MAX_SAFE_INTEGER and Number.MAX_VALUE

The definitions above are a bit vague: "maximum" is understandable, but what does "maximum safe" mean? The answer lies in how JS (really, anything following the IEEE 754 specification) stores numbers. Again, problem driven:

How is the largest number represented in JS?

The intuitive answer: from the scientific-notation point of view, the largest number naturally has the largest mantissa and the largest exponent;

  • the mantissa: all 52 fraction bits set to 1
  • the exponent: all 11 exponent bits set to 1

Easy, right? Sorry, that was a trap: this bit pattern is actually NaN in JS. Why? Because IEEE 754 reserves the all-ones exponent for special values (Infinity and NaN).

As for why the standard reserves them this way, I thought about it for a long time without a clear answer, so I'm strategically giving up and will come back to fill this in.
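Whatever the rationale, the reservation is easy to observe: writing an all-ones exponent into a double yields Infinity (zero fraction) or NaN (non-zero fraction). A sketch using the same bit-reinterpretation trick (assumes BigInt support):

```js
const buf = new ArrayBuffer(8);
const asBits  = new BigUint64Array(buf);
const asFloat = new Float64Array(buf);

asBits[0] = 0x7ff0000000000000n; // exponent all ones, fraction all zeros
asFloat[0];                      // Infinity

asBits[0] = 0x7fffffffffffffffn; // exponent all ones, fraction all ones
asFloat[0];                      // NaN — the "largest" bit pattern is not a number
```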

With that restriction, the actual maximum is:

  • the mantissa: all 52 fraction bits set to 1
  • the exponent: all 11 bits set to 1 except the last one, i.e. 111 1111 1110 (2046)

And what does that give? A mantissa of (2 - 2^-52) and an exponent of 2046 - 1023 = 1023, so the value is (2 - 2^-52) × 2^1023 = (2^53 - 1) × 2^971

This is exactly the value of Number.MAX_VALUE.

Console check:

```js
(Math.pow(2, 53) - 1) * Math.pow(2, 971) === Number.MAX_VALUE // true
```
How is the largest safe integer represented in JS?

"Safe" means that integers beyond this bound are not guaranteed to be exact. Put simply, rounding only occurs when the 52-bit mantissa overflows; together with the implicit leading 1, any integer within 2^53 - 1 is absolutely safe, with no precision loss.
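JS exposes this boundary directly via `Number.isSafeInteger`:

```js
Number.isSafeInteger(2 ** 53 - 1); // true  — this is Number.MAX_SAFE_INTEGER
Number.isSafeInteger(2 ** 53);     // false — representable, but no longer "safe"
2 ** 53 === 2 ** 53 + 1;           // true  — the precision loss in action
```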

So why do things go wrong beyond that bound? Add 1 to this number and we get the strange equality:

```js
const a = Number.MAX_SAFE_INTEGER
a + 1 === a + 2 // true
```

Number.MAX_SAFE_INTEGER in plain binary is

```
11111111111111111111111111111111111111111111111111111   (53 ones)
```

After IEEE 754 normalization (biased exponent 52 + 1023 = 1075):

```
0 | 100 0011 0011 | 1111…(52 ones in the fraction)
```

The plain binary of Number.MAX_SAFE_INTEGER + 1 (that is, 2^53) is

```
100000000000000000000000000000000000000000000000000000   (a 1 followed by 53 zeros)
```

After IEEE 754 normalization (biased exponent 53 + 1023 = 1076), the significand needs 53 fraction bits but the field holds only 52; the overflowing bit is 0, less than half, so it is simply truncated:

```
0 | 100 0011 0100 | 0000…(52 zeros)
```

The plain binary of Number.MAX_SAFE_INTEGER + 2 (that is, 2^53 + 1) is

```
100000000000000000000000000000000000000000000000000001   (54 bits, last bit 1)
```

After IEEE 754 normalization (biased exponent again 1076), the trailing 1 becomes the 53rd fraction bit and overflows the field. The dropped bit is exactly half; under round-half-to-even, the last kept bit is 0 (even), so there is no carry and the bit is truncated:

```
0 | 100 0011 0100 | 0000…(52 zeros)   (the overflowing 1 is dropped, no carry)
```

Notice that a bit was dropped without any carry (ties round to even). As a result, 2^53 + 1 ends up with exactly the same representation as 2^53 — and that is the weird equality. Let's verify:

Console check:

```js
const a = Number.MAX_SAFE_INTEGER
a + 1 === a + 2 // true
```
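The same tie-breaking rule decides which neighbours collapse together above 2^53: ties go to the even mantissa, so some additions round down and others round up.

```js
2 ** 53 + 1 === 2 ** 53;     // true — the tie rounds down to the even mantissa
2 ** 53 + 2 === 2 ** 53 + 2; // exactly representable, no rounding at all
2 ** 53 + 3 === 2 ** 53 + 4; // true — this tie rounds up to the even mantissa
```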

With this, we have truly explained the strange equality and inequality phenomena of numbers in JS.

Reference articles

  • JS - Why does 0.1 + 0.2 not equal 0.3?
  • Calculation and representation of floating-point exponents
  • Where do MAX_VALUE and MAX_SAFE_INTEGER come from?