Students often face this interview question: does 0.1 + 0.2 === 0.3 evaluate to true in JavaScript? The answer is false. Many students meet this problem for the first time, have no idea what is going on, and then try 0.3 + 0.4 === 0.7, which turns out to be true.
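You can reproduce both results in any JavaScript console:

```js
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.3 + 0.4 === 0.7); // true
console.log(0.1 + 0.2);         // 0.30000000000000004
```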

Weird question

In fact, this problem applies to every language that follows the IEEE 754 standard, not just JavaScript. IEEE 754 is the floating-point arithmetic standard established in 1985; it defines the arithmetic formats, interchange formats, rounding rules, operations, and exception handling of floating-point numbers. Written out in binary, the floating-point numbers 0.1 and 0.2 look like this:

```
0.1 -> 0.000110011001100110011... (the 0011 group repeats forever)
0.2 -> 0.001100110011001100110... (the 0011 group repeats forever)
```

Neither 0.1 nor 0.2 has a finite binary representation: both are infinite repeating fractions in binary, so we cannot express them exactly and the machine can only store a rounded approximation of their true values. The same is true of 0.3 itself:

```
0.3 -> 0.010011001100110011001... (the 0011 group repeats forever)
```
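JavaScript can show you the binary expansion it actually stores, via Number.prototype.toString with radix 2. The exact printed string is engine-dependent, but in V8 (Chrome/Node) it looks like this:

```js
// The repeating 0011 pattern is cut off (and rounded) once the
// 52-bit mantissa of a double runs out of room.
console.log((0.1).toString(2));
// 0.0001100110011001100110011001100110011001100110011001101
console.log((0.2).toString(2));
// 0.001100110011001100110011001100110011001100110011001101
```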

JavaScript numbers are IEEE 754 double-precision (64-bit) floats, and after rounding to their 52-bit mantissa, the values actually stored for 0.1 and 0.2 are:

```
0.1 -> 0.1000000000000000055511151231257827021181583404541015625
0.2 -> 0.200000000000000011102230246251565404236316680908203125
```
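You do not have to take these expansions on faith: asking for more significant digits than the default formatting shows will expose the stored value, for example with toPrecision:

```js
// 21 significant digits is enough to see the rounding error.
console.log((0.1).toPrecision(21)); // 0.100000000000000005551
console.log((0.2).toPrecision(21)); // 0.200000000000000011102
```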

so the computed sum is not 0.3 but

```
0.1 + 0.2 = 0.30000000000000004
```

which is a different double from the one stored for the literal 0.3, which is why the strict comparison returns false.
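To see exactly why the equality fails, print the full stored values of both sides with toPrecision, as above:

```js
console.log((0.1 + 0.2).toPrecision(21)); // 0.300000000000000044409
console.log((0.3).toPrecision(21));       // 0.299999999999999988898
console.log(0.1 + 0.2 === 0.3);           // false: two different doubles
```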

How to solve it?

If we need to compare 0.1 + 0.2 with 0.3, what is the right way to do it? The right way is to use the minimum precision JavaScript provides, Number.EPSILON, which is the gap between 1 and the smallest floating-point number greater than 1:

```js
Math.abs(0.1 + 0.2 - 0.3) <= Number.EPSILON // true
```

The correct way to compare floating-point numbers, then, is to check whether the absolute value of the difference between the two sides is within this minimum precision.
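Wrapped up as a reusable helper, this looks like the sketch below. The function name numbersCloseEnough is my own choice, and note that a bare Number.EPSILON tolerance is only appropriate for numbers near 1; for larger operands you would scale the tolerance accordingly:

```js
// Compare two floats within a tolerance instead of with ===.
// Number.EPSILON (2 ** -52) is the gap between 1 and the next double.
function numbersCloseEnough(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) <= epsilon;
}

console.log(numbersCloseEnough(0.1 + 0.2, 0.3)); // true
console.log(0.1 + 0.2 === 0.3);                  // false
```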