In JS, business logic sometimes involves arithmetic: addition, subtraction, multiplication, and division. If nothing is done about it, the following typical loss of precision occurs:

```js
console.log(0.1 + 0.2); // 0.30000000000000004
```

The following is a brief analysis of the reasons:

1. Number type

JS has only one numeric type, Number, which corresponds to the double type in strongly typed languages; there is no separate float or integer type. Number literals can be written in four bases (decimal, binary, octal, and hexadecimal), but only decimal and binary matter here.
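A quick illustration that integers and fractional values really are the same type:

```js
// Integers and fractional values share the single Number type.
console.log(typeof 42);  // "number"
console.log(typeof 0.5); // "number"
console.log(1 === 1.0);  // true: there is no separate integer type
```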

Binary: the prefix 0b or 0B (the digit zero followed by the letter b or B), followed by a sequence of 1s and 0s.

Decimal: the default, written with the digits 0 to 9.
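For example, both literal forms produce ordinary Numbers:

```js
// Binary literals use the 0b / 0B prefix; decimal is the default.
console.log(0b1010);  // 10
console.log(0B1111);  // 15
console.log(0b1 + 9); // 10: both bases yield the same Number type
```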

The Number type uses IEEE 754 format to represent integer and floating point values.

2. IEEE 754

A number is represented by 64 binary bits: 64 bits = 1 sign bit + 11 exponent bits + 52 mantissa (fraction) bits.

Sign bit: indicates whether the number is positive or negative, as (-1)^sign; 0 means positive and 1 means negative.

Exponent bits: scientific notation is usually used to express the magnitude of a value; here it is base-2 scientific notation, so the exponent says which power of 2 the mantissa is scaled by.

Mantissa bits: the significant digits in front of the power of 2. The IEEE 754 standard normalizes this value into the form 1.xxxxx, which lets the leading 1 be omitted so one extra bit of precision can be stored; the downside is that anything beyond the 52 stored bits is lost.
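The 64-bit layout can be inspected directly. The helper below is an illustrative sketch (the name `bits64` is made up, not a standard API) that dumps the sign, exponent, and mantissa fields of a Number:

```js
// Dump the IEEE 754 fields of a double using a DataView.
function bits64(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian by default
  let bits = '';
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, '0');
  }
  return {
    sign: bits[0],               // 1 bit
    exponent: bits.slice(1, 12), // 11 bits, biased by 1023
    mantissa: bits.slice(12),    // 52 bits, implicit leading 1 omitted
  };
}

console.log(bits64(0.1));
// sign '0', exponent '01111111011' (1019 - 1023 = -4), mantissa '10011001...'
```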

3. Loss of accuracy

The essence of precision loss is the loss that occurs when a decimal floating-point number is converted to its binary representation.

Converting an integer to binary is easy to understand, so only fractions are covered here; that is where the problem lies. Because the fraction must also end up in exponential form (for example, 1/2 = 1 × 2^-1 and 1/4 = 1 × 2^-2), converting a decimal fraction to binary amounts to testing whether it contains 1/2, 1/4, 1/8, and so on. As a procedure, this is the multiply-by-2 method: repeatedly double the fraction and take the integer part of each result as the next bit.

Binary of 0.1:

```
0.1 * 2 = 0.2  ->  take integer part 0
0.2 * 2 = 0.4  ->  take integer part 0
0.4 * 2 = 0.8  ->  take integer part 0
0.8 * 2 = 1.6  ->  take integer part 1
0.6 * 2 = 1.2  ->  take integer part 1
0.2 * 2 = 0.4  ->  take integer part 0   (the cycle begins again)
0.4 * 2 = 0.8  ->  take integer part 0
0.8 * 2 = 1.6  ->  take integer part 1
0.6 * 2 = 1.2  ->  take integer part 1
...                (repeats forever)
```

So in binary, 0.1 is 0.0001 1001 1001 1001 ... (the group 1001 repeats infinitely).
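The doubling procedure above can be sketched in code. To avoid the very rounding error under discussion, the fraction is kept as an exact integer numerator/denominator pair (the function name is illustrative):

```js
// Multiply-by-2 method: each doubling's integer part is the next bit.
// The fraction num/den is tracked with exact integer arithmetic.
function fractionToBinary(num, den, maxBits = 20) {
  let bits = '';
  for (let i = 0; i < maxBits && num !== 0; i++) {
    num *= 2;          // "multiply by 2"
    if (num >= den) {  // integer part is 1
      bits += '1';
      num -= den;      // keep only the fractional part
    } else {           // integer part is 0
      bits += '0';
    }
  }
  return '0.' + bits;
}

console.log(fractionToBinary(1, 10)); // 0.1  -> "0.00011001100110011001"
console.log(fractionToBinary(1, 4));  // 0.25 -> "0.01"
```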

In the same way, the binary form of 0.2 is 0.0011 0011 0011 0011 ... (the group 0011 repeats infinitely).

The numbers in the computer are stored in binary. Binary floating-point representation cannot accurately represent simple numbers like 0.1.

To compute 0.1 + 0.2, the computer converts 0.1 and 0.2 to binary, adds them, and finally converts the sum back to decimal.

But some floating-point numbers, such as 0.1 and 0.2 above, produce an infinitely repeating expansion when converted to binary.

The mantissa of the storage format can hold at most 53 significant bits (52 stored plus the implicit leading 1). To store 0.1, the repeating expansion must be rounded, much like decimal rounding, except that binary has only 0 and 1: a discarded leading 0 rounds down and a discarded leading 1 rounds up. Thus, 0.1 and 0.2 are actually stored on the computer as follows:

0.1 => 0.0001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1010

0.2 => 0.0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 010
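You can see the actually stored (already rounded) expansions from JS itself via toString(2); note that trailing zero bits are not printed:

```js
// The stored bit patterns of 0.1 and 0.2 (trailing zeros omitted):
console.log((0.1).toString(2));
// "0.0001100110011001100110011001100110011001100110011001101"
console.log((0.2).toString(2));
// "0.001100110011001100110011001100110011001100110011001101"
```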

Expressed in normalized scientific form, that is:

0.1 => (-1)^0 * 2^-4 * 1.1001100110011001100110011001100110011001100110011010 (binary)

0.2 => (-1)^0 * 2^-3 * 1.1001100110011001100110011001100110011001100110011010 (binary)

Finally, the computation of 0.1 + 0.2 inside the computer proceeds as follows:

```
Exponent     Mantissa
  -3         0.1100 1100 1100 1100 1100 1100 1100 1100 1100 1100 1100 1100 1101 0   (0.1, shifted right to match 0.2's exponent)
  -3       + 1.1001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1010     (0.2)
  ---------------------------------------------------------------------------------
  -3        10.0110 0110 0110 0110 0110 0110 0110 0110 0110 0110 0110 0110 0111 0

Normalize into standard 1.xxx form (integer part 1, exponent + 1):
  -2         1.0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 10

Round to 52 fraction bits. The discarded tail "10" sits exactly at the halfway
point, so round half to even: the last kept bit is 1 (odd), so round up:
  -2         1.0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0011 0100
```
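The rounded result of the hand calculation matches what the engine actually stores, which can be confirmed directly:

```js
// The sum's stored bits match the hand calculation (trailing zeros omitted):
console.log((0.1 + 0.2).toString(2));
// "0.0100110011001100110011001100110011001100110011001101"
console.log(0.1 + 0.2 === 0.3); // false
```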

After the above calculation process, the result of 0.1 + 0.2 can also be expressed as:

0.1 + 0.2 => (-1)^0 * 2^-2 * 1.0011001100110011001100110011001100110011001100110100 (binary) = 0.30000000000000004

Converting the binary result back to decimal with JS:

```js
(-1) ** 0 * 2 ** -2 * (0b10011001100110011001100110011001100110011001100110100 * 2 ** -52); // 0.30000000000000004
```

Hence:

```js
console.log(0.1 + 0.2); // 0.30000000000000004
```

This is a typical case of precision loss. As the calculation above shows, precision is lost when 0.1 and 0.2 are converted to binary, and lost again when the computed sum is rounded back to 52 fraction bits, so the final result is not exact.

4. Solutions

  • Since the deviation of a single floating-point operation is very small (though not always), round the result to a specified precision, e.g. parseFloat(result.toFixed(12));

  • Scale the floating-point numbers up to integers, operate on those, then scale the result back down. For example, 0.1 + 0.2 can be computed as (1 + 2) / 10.

  • Convert the floating-point numbers to strings and simulate the arithmetic digit by digit.
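Minimal sketches of the first two approaches (the helper names are illustrative, not from any particular library):

```js
// 1) Round the result to a fixed number of decimal places.
function addRounded(a, b) {
  return parseFloat((a + b).toFixed(12));
}

// 2) Scale to integers, operate, then scale back.
//    Assumes the operands' decimal places are known and small.
function addScaled(a, b, decimals = 1) {
  const factor = 10 ** decimals;
  return (Math.round(a * factor) + Math.round(b * factor)) / factor;
}

console.log(addRounded(0.1, 0.2)); // 0.3
console.log(addScaled(0.1, 0.2));  // 0.3
```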

Implementations of these three approaches are not shown here; they are easy to find online.

To sum up, the third approach is recommended. There are many mature libraries for it, and you can choose the tool that fits your needs. These libraries not only solve the precision problem of floating-point numbers but also support big numbers and fix the inaccurate results of the native toFixed.

If anything above is inaccurate, corrections and suggestions are welcome.