If you ask a pupil what 0.1 + 0.2 equals, I am sure they will tell you without hesitation that it equals 0.3. But ask a programmer, and they will quietly say "wait a minute" and then quickly type a few lines of code:
public class Demo {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);
    }
}
What you're thinking is, "Did this guy never graduate from elementary school? Does he need a computer for this?" Just as you are about to turn away in disdain, a result appears on the screen:
0.30000000000000004
"What the hell? Don't lie to me. The computer must be broken!"
OK, the computer is not broken. So why are a powerful computer and a brilliant programmer losing to an elementary school student? Let's break it down.
First of all, let's review an important piece of compulsory-education knowledge: scientific notation.
A number is written in the form a × 10^n, where 1 ≤ |a| < 10 and n is an integer. For example, 123.45 is written as 1.2345 × 10^2.
We all know that data in a computer is represented in binary, and there are two common formats for storing floating-point numbers in memory:
Float (single precision): 1 sign bit, 8 exponent bits, 23 mantissa bits
Double (double precision): 1 sign bit, 11 exponent bits, 52 mantissa bits
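To make the double layout concrete, here is a minimal sketch (the class name DoubleLayout is just illustrative) that uses Double.doubleToLongBits to pull the three fields out of a double:

public class DoubleLayout {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(0.1);
        long sign     = bits >>> 63;               // 1 sign bit
        long exponent = (bits >>> 52) & 0x7FF;     // 11 exponent bits (biased)
        long mantissa = bits & 0xFFFFFFFFFFFFFL;   // 52 mantissa bits
        // Long.toBinaryString drops leading zeros, so the exponent prints as 1111111011
        System.out.println("sign     = " + sign);
        System.out.println("exponent = " + Long.toBinaryString(exponent));
        System.out.println("mantissa = " + Long.toBinaryString(mantissa));
    }
}

We will arrive at exactly these bit patterns for 0.1 by hand below.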
First, convert decimal 0.1 to binary. A decimal fraction is converted to binary by repeatedly multiplying by 2 and taking the integer part each time (a code sketch of this procedure follows the result below):
0.1 × 2 = 0.2  integer part 0, the fractional part continues × 2
0.2 × 2 = 0.4  integer part 0, the fractional part continues × 2
0.4 × 2 = 0.8  integer part 0, the fractional part continues × 2
0.8 × 2 = 1.6  integer part 1, the fractional part 0.6 continues × 2
0.6 × 2 = 1.2  integer part 1, the fractional part 0.2 continues × 2
0.2 × 2 = 0.4  integer part 0, the fractional part continues × 2
… and the 0011 pattern now repeats forever, so 0.1 has no finite binary representation.
Carrying this out to 64 bits gives the binary fraction:
0.0001100110011001100110011001100110011001100110011001100110011001
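The multiply-by-2 procedure above is easy to replay in code. Here is a small sketch (the class name DecimalToBinary is illustrative); it uses BigDecimal so the arithmetic stays exact and the repeating 0011 pattern is not disturbed by double rounding:

import java.math.BigDecimal;

public class DecimalToBinary {
    public static void main(String[] args) {
        BigDecimal fraction = new BigDecimal("0.1");   // exact decimal 0.1
        BigDecimal two = BigDecimal.valueOf(2);
        StringBuilder bits = new StringBuilder("0.");
        for (int i = 0; i < 64; i++) {
            fraction = fraction.multiply(two);          // multiply by 2
            int integerPart = fraction.intValue();      // take the integer part (0 or 1)
            bits.append(integerPart);
            fraction = fraction.subtract(BigDecimal.valueOf(integerPart)); // keep the fraction
        }
        System.out.println(bits); // 0.000110011001100110011...
    }
}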
Binary scientific notation works the same way: a number is written as a × 2^n, where 1 ≤ |a| < 2 and n is an integer; a is the mantissa and n is the exponent. So 0.1 becomes:
1.100110011001100110011001100110011001100110011001100110011001… × 2^-4
Java floating-point literals default to double precision, which stores the mantissa in 52 bits. Since the integer part of the normalized binary form is always 1, it does not need to be stored; we keep the first 52 bits of the fractional part and round the rest away (discard if the next bit is 0, round up if it is 1; here the next bit is 1, so we round up):
1001100110011001100110011001100110011001100110011010
Since the exponent field is 11 bits, the exponent bias is 2^10 - 1 = 1023. The stored exponent X satisfies X - 1023 = -4, so X = 1019. Converted to binary this is 1111111011; padding with a 0 in the high position to fill 11 bits gives 01111111011.
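That arithmetic is easy to double-check (a one-off sketch; the class name is illustrative):

public class ExponentBias {
    public static void main(String[] args) {
        int bias = (1 << 10) - 1;   // 2^10 - 1 = 1023
        int stored = -4 + bias;     // actual exponent -4  ->  stored value 1019
        // prints 1111111011; padded to 11 bits this is 01111111011
        System.out.println(Integer.toBinaryString(stored));
    }
}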
The sign bit is 0 (positive), so the double-precision representation of 0.1 can be written as follows, where (1.) marks the implicit leading 1 that is not actually stored:
0  01111111011  (1.) 1001100110011001100110011001100110011001100110011010
Similarly (0.2 = 1.100110011… × 2^-3, so its stored exponent is 1020 = 01111111100), the double-precision representation of 0.2 is:
0  01111111100  (1.) 1001100110011001100110011001100110011001100110011010
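You do not have to take these two bit patterns on faith. A short sketch (the class name ShowBits is illustrative) asks the JVM for them directly and splits each 64-bit pattern into sign, exponent, and mantissa:

public class ShowBits {
    public static void main(String[] args) {
        for (double d : new double[] {0.1, 0.2}) {
            // pad Long.toBinaryString to a full 64 bits with leading zeros
            String bits = String.format("%64s", Long.toBinaryString(Double.doubleToLongBits(d)))
                                .replace(' ', '0');
            System.out.println(d + " = " + bits.substring(0, 1)    // sign
                                 + " " + bits.substring(1, 12)     // exponent
                                 + " " + bits.substring(12));      // mantissa
        }
    }
}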
Before the mantissas can be added, the exponents of 0.1 and 0.2 must be made equal, so 0.1's exponent is raised from -4 to -3 and its mantissa is shifted one bit to the right, giving 0.11001100110011001100110011001100110011001100110011010; the bit that falls off the end is a 0, so by the discard-0/round-up-1 rule it is simply dropped, and 0.1 becomes:
0  01111111100  (0.) 1100110011001100110011001100110011001100110011001101
Now the mantissas can be added:
   0.1100110011001100110011001100110011001100110011001101
+  1.1001100110011001100110011001100110011001100110011010
  10.0110011001100110011001100110011001100110011001100111
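The same addition can be replayed exactly with BigInteger by parsing the two aligned mantissas (binary points removed) as base-2 strings; this is only a sketch to confirm the sum above:

import java.math.BigInteger;

public class MantissaAdd {
    public static void main(String[] args) {
        // 0.1 after its exponent was raised to -3 (0.1100...1101 without the binary point)
        BigInteger a = new BigInteger("01100110011001100110011001100110011001100110011001101", 2);
        // 0.2 (1.1001...1010 without the binary point)
        BigInteger b = new BigInteger("11001100110011001100110011001100110011001100110011010", 2);
        // prints 100110011001100110011001100110011001100110011001100111,
        // i.e. 10.0110011001100110011001100110011001100110011001100111 with the point restored
        System.out.println(a.add(b).toString(2));
    }
}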
Normalize the result: shift the mantissa one bit to the right (a trailing 1 is shifted out) and add 1 to the exponent, which gives:
0  01111111101  (1.) 0011001100110011001100110011001100110011001100110011  (1)
Since the bit shifted out on the right is a 1, we round up by adding 1 to the mantissa:
  0011001100110011001100110011001100110011001100110011
+ 0000000000000000000000000000000000000000000000000001
  0011001100110011001100110011001100110011001100110100
The final result is:
0  01111111101  (1.) 0011001100110011001100110011001100110011001100110100
Convert to base 10:
0.30000000000000004440892098500626
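In Java you can see this full expansion for yourself: new BigDecimal(double) takes the exact value of the double, so printing it shows exactly what 0.1 + 0.2 produced (a small sketch; the class name is illustrative):

import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // the exact decimal value of the double produced by 0.1 + 0.2
        System.out.println(new BigDecimal(0.1 + 0.2));
        // for comparison, the exact decimal value of the double literal 0.3
        System.out.println(new BigDecimal(0.3));
    }
}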
This is why 0.1+0.2 does not equal 0.3.