A few questions

Let’s start with a few questions:

  1. Why is 0.1 + 0.2 !== 0.3?
  2. Why does (1.005).toFixed(2) return 1.00 instead of 1.01?
  3. Why do two constants, Number.MAX_VALUE and Number.MAX_SAFE_INTEGER, exist side by side?

We'll use these three questions as a thread to work through the underlying details.

Double precision storage

Before we get started, we need to understand how the JavaScript number type is stored in the computer, because that is the root of all three problems. JavaScript numbers are all of type number: both integers and floating-point values are stored in the IEEE 754 double-precision (double) format. What does a double look like? The storage layout is as follows:





1 sign bit + 11 exponent bits + 52 mantissa bits

For example, if the number is 5.5, the calculation would look like this:


5.5 to binary =====> 101.1, in scientific notation =====> 1.011 * 2^2


Stored in the computer:


Sign bit: 0


Exponent bits: 2 + 1023 =====> 1025 to binary =====> 10000000001


Mantissa bits: 1.011, with the implicit 1 to the left of the decimal point hidden =====> 011

Stored in the computer, it looks like the picture below. The screenshot comes from an IEEE 754 visualization tool, which you can play with if you are interested.


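If you don't want to open the tool, here is a minimal sketch that dumps the raw 64 bits of 5.5 with a DataView (any modern browser or Node.js will do):

// Dump the raw IEEE 754 bits of the number 5.5
const buf = new ArrayBuffer(8);
new DataView(buf).setFloat64(0, 5.5);  // big-endian by default
const bits = [...new Uint8Array(buf)]
  .map(byte => byte.toString(2).padStart(8, '0'))
  .join('');

console.log(bits.slice(0, 1));   // sign bit:      0
console.log(bits.slice(1, 12));  // exponent bits: 10000000001  (1025)
console.log(bits.slice(12));     // mantissa bits: 011 followed by 49 zeros (the leading 1 is implicit)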

Let’s move on to the first question

Why is 0.1 + 0.2 !== 0.3?

You can test the results on the browser console:
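For example, in a browser console or Node.js:

console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
console.log(0.3 - (0.1 + 0.2));  // -5.551115123125783e-17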



Here is why this happens.

0.1 to binary =====> 0.0001100110011001100… (the group 1100 repeats forever)


In scientific notation =====> 1.100110011… (1100 repeating) * 2^-4


The repeating pattern goes on forever, but the number of mantissa bits is limited: only 52 are available, so everything from the 53rd bit onward is dropped and the last kept bit is rounded up (carried).

The final storage in the computer is as follows:



0.2 is stored in the computer in a similar way, as shown below:



So the final calculation is:

  0.00011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
= 0.0100110011001100110011001100110011001100110011001100111

This exact sum still has too many bits to fit into 52 mantissa bits, so it is rounded once more when stored; converted back to decimal, the result is 0.30000000000000004.

So 0.1 and 0.2 each lose a little precision when they are stored in binary; the values in memory are not exactly 0.1 and 0.2, and therefore their sum is not exactly 0.3.
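You can see all of this directly with toString(2), which prints the binary value that is actually stored:

console.log((0.1).toString(2));        // 0.000110011001100110011... (not exactly 0.1)
console.log((0.2).toString(2));        // 0.00110011001100110011...  (not exactly 0.2)
console.log((0.1 + 0.2).toString(2));  // 0.010011001100110011...    (the rounded sum)
console.log((0.3).toString(2));        // 0.010011001100110011...    (same start, different tail)
console.log(0.1 + 0.2);                // 0.30000000000000004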


This raises a question: if 0.1 is stored in the computer with a rounding error, why does num = 0.1 give back exactly 0.1 when you print it?

You can use toPrecision in the console to take a look at what 0.1 returns at varying precision:
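Something like this:

console.log((0.1).toPrecision(5));   // 0.10000
console.log((0.1).toPrecision(17));  // 0.10000000000000001
console.log((0.1).toPrecision(21));  // 0.100000000000000005551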




In fact, the displayed 0.1 is itself the result of cutting off part of the precision, so the question becomes: what rule is used to shorten a double-precision floating-point number when it is converted back to decimal?

You can find the following passage on Wikipedia:



If an IEEE 754 double-precision floating-point number is converted to a decimal string with at least 17 significant digits, that string converts back to the original double-precision number. In other words, when a double-precision floating-point number is converted to decimal, the shortest decimal string that still converts back to the same number is chosen.

For example, 0.1 and 0.10000000000000001 convert to exactly the same double-precision floating-point number, so the shortest form, 0.1, is the one displayed.
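A quick way to convince yourself in the console:

console.log(0.1 === 0.10000000000000001);  // true: both literals round to the same stored double
console.log(0.10000000000000001);          // 0.1: the engine prints the shortest form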

Here’s question number two

Why does (1.005).toFixed(2) return 1.00 instead of 1.01?

As mentioned in the first question, converting a decimal number to a double-precision floating-point number and reading it back can introduce error. Try printing 1.005 with 20 significant digits:
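For example:

console.log(1.005);                    // 1.005  (again, the shortest form that maps back)
console.log((1.005).toPrecision(20));  // 1.0049999999999998934
console.log((1.005).toFixed(2));       // 1.00  (the third decimal is really a 4, so it rounds down)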




It is clear that the displayed 1.005 is itself a shortened form: the underlying double-precision value, printed with 20 significant digits, is 1.0049999999999998934. When rounding to two decimal places, the digit after the second decimal place is a 4, so everything after it is simply dropped and toFixed(2) gives 1.00.

Why do two constants, Number.MAX_VALUE and Number.MAX_SAFE_INTEGER, exist side by side?

Take a look at the console:
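For instance:

console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991  (2 ** 53 - 1)
console.log(Number.MAX_VALUE);         // 1.7976931348623157e+308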




Why is the maximum safe integer 2^53 - 1? JavaScript numbers store 52 mantissa bits, but because the leading 1 of the scientific notation is implicit and not stored, those 52 bits plus the implicit 1 give 53 bits of precision.

When all 53 bits are ones, the decimal value is 2^53 - 1 = 9007199254740991.



So why is 2^53 - 1 the largest safe integer? What about numbers bigger than that?

Try it in a browser:
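For example:

console.log(2 ** 53);                  // 9007199254740992
console.log(2 ** 53 + 1);              // 9007199254740992  (the same number!)
console.log(2 ** 53 === 2 ** 53 + 1);  // true
console.log(2 ** 53 + 2);              // 9007199254740994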




Let's use 2^53 to explain why 2^53 - 1 is the largest safe integer, and what "safe" actually means.

2^53 to binary =====> 100000000000000000000000000000000000000000000000000000 (1 followed by 53 zeros)


To scientific notation =====> 1.00000000000000000000000000000000000000000000000000000 * 2^53 (53 zeros after the point)


Stored in the computer =====> there are only 52 mantissa bits, so the trailing zero is cut off and only 52 zeros are kept


2^53 + 1 to binary =====> 100000000000000000000000000000000000000000000000000001 (52 zeros between the two ones)


To scientific notation =====> 1.00000000000000000000000000000000000000000000000000001 * 2^53 (52 zeros, then a 1, after the point)


Stored in the computer =====> there are only 52 mantissa bits, so the trailing 1 is cut off and only 52 zeros are kept

As you can see, 2^53 and 2^53 + 1 end up with exactly the same mantissa and exponent in the computer: two different numbers share one stored representation, which is why integers in this range are no longer "safe".
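This is exactly the check that Number.isSafeInteger performs:

// "Safe" means the integer maps to one and only one stored double, and vice versa
console.log(Number.isSafeInteger(2 ** 53 - 1));      // true
console.log(Number.isSafeInteger(2 ** 53));          // false
console.log(9007199254740992 === 9007199254740993);  // true: two different literals, one stored value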

So 2^53 - 1 is the largest safe integer in JavaScript. Number.MAX_VALUE, on the other hand, is the decimal value you get when all 52 mantissa bits are 1 and the exponent field is at its largest ordinary value (an all-ones exponent field is reserved for Infinity and NaN): roughly 1.7976931348623157e+308.

How to solve

The floating-point precision, rounding, and big-integer problems mentioned above can all be handled with a big-number library such as bignumber; open the link and try writing a small demo in the console.
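Here is a minimal sketch using bignumber.js (this assumes the library has been installed with npm i bignumber.js; other libraries such as big.js or decimal.js work in a similar way):

// Work with decimal strings instead of binary doubles
const BigNumber = require('bignumber.js');

// 0.1 + 0.2 without the binary rounding error
console.log(new BigNumber('0.1').plus('0.2').toString());           // 0.3

// Rounding 1.005 to two decimal places the way you expect
console.log(new BigNumber('1.005').toFixed(2));                     // 1.01

// Integer arithmetic beyond Number.MAX_SAFE_INTEGER
console.log(new BigNumber('9007199254740992').plus(1).toString());  // 9007199254740993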

Afterword.

Many hard-to-track bugs in real services are caused by gaps in basic knowledge. Once you understand these principles, you know exactly what your code is doing and can debug it quickly, instead of getting stuck in the CV (Ctrl+C, Ctrl+V) engineer loop.

———————————

Welcome to follow my WeChat official account, Front-end Ancient Beast: we do front end, and more than front end.