In a word, the ECMAScript specification defines the Number type according to the 64-bit double-precision floating-point rules of IEEE 754-2008.

If you don’t understand this sentence, read this blog carefully!

First, let’s look at how to convert a decimal fraction to binary.

Using the multiply-by-2 algorithm from digital logic circuits, (0.1)10 = (0.0)2 when kept to a single binary place.

As an aside, the digital logic circuits course from my sophomore year has finally come in handy at work.

Here is the first step: 0.1 × 2 = 0.2, so the integer bit is 0; with precision of only one binary place, the result is 0.0.

With unlimited precision, the converted binary number would be 0.000110011001100110011(0011)…, with the 0011 repeating forever.
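To make the algorithm concrete, here is a minimal sketch in JavaScript (fractionToBinary is a made-up helper name, not a built-in). One caveat: the literal 0.1 is itself stored imprecisely, so the printed bits only match the true expansion for the first several dozen places.

```javascript
// Sketch of the multiply-by-2 conversion described above.
// fractionToBinary is a hypothetical helper, not a standard API.
function fractionToBinary(fraction, maxBits = 24) {
  let bits = '0.';
  for (let i = 0; i < maxBits && fraction > 0; i++) {
    fraction *= 2;
    if (fraction >= 1) {
      bits += '1'; // integer part is 1: record it and strip it off
      fraction -= 1;
    } else {
      bits += '0'; // integer part is 0
    }
  }
  return bits;
}

console.log(fractionToBinary(0.1)); // "0.000110011001100110011001" (the 0011 keeps repeating)
```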

Written in a stranger-looking form, that is (-1)^0 × 1.100110011(0011 repeating) × 2^-4.

The formula above is exactly analogous to decimal scientific notation:

0.0001234567 = 1.234567 × 10^-4
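Incidentally, JavaScript can produce this decimal scientific form directly with the standard toExponential method:

```javascript
console.log((0.0001234567).toExponential()); // "1.234567e-4"
```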

Why do I write it this way?

What does the (-1)^0 mean?

This is the representation of floating-point numbers defined by IEEE 754, the international floating-point standard.

The format is:

(-1)^S × M × 2^E

The meaning of each symbol is as follows: S is the sign bit, which determines positive or negative (0 is positive, 1 is negative); M is the significand, greater than or equal to 1 and less than 2; E is the exponent.

Hence the following form:

(-1)^0 × 1.100110011(0011 repeating) × 2^-4
S = 0, M = 1.100110011(0011 repeating), E = -4

The corresponding form for 0.2 is:

(-1)^0 × 1.100110011(0011 repeating) × 2^-3
S = 0, M = 1.100110011(0011 repeating), E = -3

So what does this have to do with JavaScript?

Bear with me: IEEE 754 also defines two concrete storage formats.

IEEE 754 states that for 32-bit floating-point numbers, the highest 1 bit is the sign bit S, the next 8 bits are the exponent E, and the remaining 23 bits are the significand M.

For 64-bit floating-point numbers, the highest 1 bit is the sign bit S, the next 11 bits are the exponent E, and the remaining 52 bits are the significand M.

So again, what does this have to do with our JavaScript?

The Number type in JavaScript is defined strictly according to the IEEE 754 standard. The definition of the Number type in the latest version of ECMA-262 is given below:

6.1.6 The Number Type: The Number type has exactly 18437736874454810627 (that is, 2^64 - 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754-2008 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53 - 2) distinct "Not-a-Number" values of the IEEE Standard are represented in ECMAScript as a single special NaN value.
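Both counts in that quote are easy to verify with BigInt (a plain Number lacks the precision to hold them):

```javascript
console.log(2n ** 64n - 2n ** 53n + 3n); // 18437736874454810627n, the total count of Number values
console.log(2n ** 53n - 2n);             // 9007199254740990n, the NaN bit patterns collapsed into one NaN
```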

For the bit-level layout, see Wikipedia’s article on the IEEE 754 standard.

Thus, for JavaScript’s Number type, the highest 1 bit is the sign bit S, the next 11 bits are the exponent E, and the remaining 52 bits are the significand M.

Take 0.1 for example:

(-1)^0 × 1.100110011(0011 repeating) × 2^-4

S = 0, M = 1.100110011(0011 repeating), E = -4

Here the infinitely repeating significand cannot be kept in full: it must fit into 53 significant bits (1 implicit bit plus 52 stored bits), so it is rounded to the nearest representable value, which is why the stored bits end in 1010 rather than continuing the 1001 pattern.
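You can watch the engine do this by reading a Number's raw 64 bits. Below is a small sketch; decompose is a hypothetical helper name, while DataView itself is a standard API:

```javascript
// Slice a Number's 64 bits into sign, exponent, and significand fields.
function decompose(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian by default
  const hi = view.getUint32(0);
  const lo = view.getUint32(4);
  const S = hi >>> 31;                    // 1 sign bit
  const E = ((hi >>> 20) & 0x7ff) - 1023; // 11 exponent bits, bias 1023
  const mantissa =                        // 20 high bits + 32 low bits = 52 stored bits
    (hi & 0xfffff).toString(2).padStart(20, '0') +
    lo.toString(2).padStart(32, '0');
  return { S, E, M: '1.' + mantissa };    // prepend the implicit leading 1
}

console.log(decompose(0.1));
// { S: 0, E: -4, M: '1.1001100110011001100110011001100110011001100110011010' }
```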

When the JS engine evaluates the literal 0.1, the value it actually stores is:

1.1001100110011001100110011001100110011001100110011010 × 2^-4

Similarly, 0.2 is stored as:

1.1001100110011001100110011001100110011001100110011010 × 2^-3

Take out the key exponents and significands:

(1) 1.1001100110011001100110011001100110011001100110011010 × 2^-4   (this is 0.1)
(2) 1.1001100110011001100110011001100110011001100110011010 × 2^-3   (this is 0.2)

Shifting the binary point of formula (1) left by four places (its exponent) turns it into the pure binary fraction labelled (3) below.

Shifting the binary point of formula (2) left by three places gives the pure binary fraction labelled (4). Nothing is lost in this step; the rounding happens after the addition, because the exact sum needs 54 significant bits, one more than fits.

What’s the reason?

The reason is that the Number type in JS stores only 52 fraction bits of the significand, bits 0 through 51 inclusive; together with the implicit leading 1, a value carries 53 significant binary digits.
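This 53-bit limit is the same one behind Number.MAX_SAFE_INTEGER:

```javascript
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
console.log(2 ** 53 === 2 ** 53 + 1);                 // true: beyond 53 bits, adjacent integers collide
```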

Type (0.1).toString(2) into the Chrome console and it prints the following: “0.0001100110011001100110011001100110011001100110011001101”

Counting from the leading 1, that is exactly 52 significant bits (the 53rd stored bit is a trailing zero, which toString omits), exactly in line with the specification and our reasoning.

Back to the classic problem of 0.1 + 0.2 === 0.30000000000000004.

In ECMAScript, whether in a browser or in Node.js, the computation of 0.1 + 0.2 actually proceeds as follows:

  0.00011001100110011001100110011001100110011001100110011010   (3)
+ 0.00110011001100110011001100110011001100110011001100110100   (4)
= 0.01001100110011001100110011001100110011001100110011001110   (exact sum, 54 significant bits)
→ 0.010011001100110011001100110011001100110011001100110100     (5) (rounded back to 53 significant bits)

Formula (5) is exactly the binary form of the decimal number 0.30000000000000004 (17 significant decimal digits).

That’s why 0.1 + 0.2 === 0.30000000000000004.
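You can confirm the whole chain in the console:

```javascript
console.log((0.1).toString(2));       // "0.0001100110011001100110011001100110011001100110011001101"
console.log((0.2).toString(2));       // "0.001100110011001100110011001100110011001100110011001101"
console.log((0.1 + 0.2).toString(2)); // "0.0100110011001100110011001100110011001100110011001101"
console.log(0.1 + 0.2 === 0.3);       // false
console.log(0.1 + 0.2);               // 0.30000000000000004
```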

Although the result we would ideally like is 0.3, this confirms the saying that the ideal is plump while reality is skinny.

So, is there a way to make 0.1 + 0.2 come out as 0.3?

The question is worth asking because many other computations lose precision the same way, for example:

0.3 / 0.1 === 2.9999999999999996, 0.7 * 180 === 125.99999999999999, and so on.
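Both are easy to check in the console:

```javascript
console.log(0.3 / 0.1 === 3); // false
console.log(0.3 / 0.1);       // 2.9999999999999996
console.log(0.7 * 180);       // 125.99999999999999
```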

So is there a way around this?

Next blog: how to solve problems like 0.1 + 0.2 === 0.30000000000000004.

References:

segmentfault.com/a/119000000…

demon.tw/copy-paste/…

www.ruanyifeng.com/blog/2010/0…

www.css88.com/archives/73…

www.ecma-international.org/ecma-262/8…

en.wikipedia.org/wiki/Floati…

I look forward to communicating with you and making progress together. You are welcome to follow the channels below and join the front-end development discussion group I created:

  • SegmentFault tech circle: ES new specification syntax sugar
  • SegmentFault column: Be a good front-end engineer while you’re still young
  • Zhihu column: Be an excellent front-end engineer while you are still young
  • Github blog: Personal blog 233 while You’re Still Young
  • Front-end development QQ group: 660634678
  • excellent_developers

Strive to be an excellent front-end engineer!