The reason lies in the double-precision (64-bit) floating-point format of IEEE 754, which JS uses for all numbers. When data is encoded for storage inside the computer, 0.1 is not an exact 0.1 at all, but the nearest representable double, which carries a small rounding error. By the time the code is compiled or interpreted, 0.1 has already been rounded to this internal value, so the error exists before the calculation even begins. The same goes for 0.2 and 0.3, which is why 0.1 + 0.2 does not equal 0.3.
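
You can see the stored values by asking JS to print more digits than it normally shows (a quick check you can run in any JS console):

```js
// Each literal is rounded to the nearest double before any math runs
console.log((0.1).toPrecision(21)); // "0.100000000000000005551"
console.log((0.2).toPrecision(21)); // "0.200000000000000011102"
console.log(0.1 + 0.2);             // 0.30000000000000004
```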

Also note that not all floating-point numbers have rounding errors. Binary can exactly represent any decimal that reduces to a fraction with a power of two in the denominator, such as 0.5 (1/2) or 0.25 (1/4); these are stored with no rounding error at all. So 0.5 + 0.5 === 1 holds exactly.
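
A few comparisons make the distinction clear:

```js
console.log(0.5 + 0.5 === 1);     // true  — 1/2 is exact in binary
console.log(0.25 + 0.25 === 0.5); // true  — 1/4 is exact in binary
console.log(0.1 + 0.2 === 0.3);   // false — 1/10 and 1/5 are not exact
```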

Sometimes, when two approximations are added, the rounding errors happen to cancel: the computed sum rounds to exactly the double we expect, and the comparison comes out true. There is no need to memorize which combinations happen to work and which do not.
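
For instance, 0.1 + 0.2 misses 0.3, but the errors in 0.1 + 0.3 cancel out:

```js
console.log(0.1 + 0.2 === 0.3); // false — the errors accumulate
console.log(0.1 + 0.3 === 0.4); // true  — the errors happen to cancel
```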

How do we avoid this problem?

The best way is to sidestep decimal precision altogether, and the most common technique is to convert the floating-point numbers to integers first, because integers (up to Number.MAX_SAFE_INTEGER, i.e. 2^53 - 1) can be represented exactly.

The usual solution is to multiply each number by 10 to the N, do the arithmetic on the resulting integers, and then divide the result by 10 to the N. I usually use 1000 (N = 3).

(0.1 * 1000 + 0.2 * 1000) / 1000 === 0.3
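
Wrapped into a small helper, this looks like the sketch below (the name addDecimals is my own, not a built-in). One caveat: the scaling multiplication can itself round off, e.g. 0.57 * 100 evaluates to 56.99999999999999, so rounding the scaled values to integers makes the trick reliable:

```js
// Minimal sketch: scale to integers, add, then scale back down.
// Math.round guards against the scaling step itself rounding off,
// e.g. 0.57 * 100 === 56.99999999999999 in JS.
function addDecimals(a, b, n = 3) {
  const factor = 10 ** n;
  return (Math.round(a * factor) + Math.round(b * factor)) / factor;
}

console.log(addDecimals(0.1, 0.2)); // 0.3
console.log(0.1 + 0.2);             // 0.30000000000000004
```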