This is the third day of my participation in the August More Text Challenge

How are numbers stored in JS, and why can adding decimals give inaccurate results? Take the classic example:
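A quick check in the console shows the problem:

console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false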

Cause analysis

In everyday life we work with numbers in base 10 (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), but computers store numbers in base 2 (0 and 1). So how do we convert from base 10 to base 2?

Converting decimal integers to binary

To convert a decimal integer to binary, divide the number by 2 to get a quotient and a remainder, then keep dividing the quotient by 2 until the quotient is 0, collecting all the remainders along the way.

For example, the decimal number 10 converted to binary is 1010

expression    quotient    remainder
10 / 2        5           0
5 / 2         2           1
2 / 2         1           0
1 / 2         0           1

Then read the remainders from bottom to top: 1010 is the binary representation of 10.
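As a sketch, the divide-by-2 process can be written out like this (toBinaryInt is just an illustrative name; the built-in (10).toString(2) gives the same result):

// Convert a non-negative decimal integer to a binary string by repeated
// division by 2, collecting remainders.
function toBinaryInt(n) {
  if (n === 0) return "0";
  let bits = "";
  while (n > 0) {
    bits = (n % 2) + bits;  // prepend the remainder
    n = Math.floor(n / 2);  // continue with the quotient
  }
  return bits;
}

console.log(toBinaryInt(10));  // "1010"
console.log((10).toString(2)); // "1010"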

Converting decimal fractions to binary

Multiply the fractional part by 2 and take the integer part of the result as the next binary digit. If the remaining fractional part is 0, stop; otherwise keep multiplying by 2.

For example, decimal 10.25 converts to binary 1010.01; the fractional part 0.25 works out as follows:

expression    product    fractional part    integer part
0.25 * 2      0.5        0.5                0
0.5 * 2       1.0        0                  1

Then read the integer parts from top to bottom: binary .01 is 0.25 in base 10.
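A sketch of the multiply-by-2 process (fractionToBinary is an illustrative name; maxBits stops the loop for fractions whose binary expansion never terminates):

// Convert a decimal fraction (0 <= frac < 1) to a binary string by repeated
// multiplication by 2, collecting the integer parts.
function fractionToBinary(frac, maxBits = 64) {
  let bits = "";
  while (frac > 0 && bits.length < maxBits) {
    frac *= 2;
    if (frac >= 1) {
      bits += "1";
      frac -= 1;
    } else {
      bits += "0";
    }
  }
  return "0." + bits;
}

console.log(fractionToBinary(0.25)); // "0.01"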

Converting binary integers to decimal

To convert a binary integer to decimal, multiply each digit by 2^n, where n starts at 0 for the rightmost digit and increases by 1 for each digit to the left, then add up all the products.

For example, binary 1000010 converts to decimal 66:

1000010 = (1 × 2^6) + (0 × 2^5) + (0 × 2^4) + (0 × 2^3) + (0 × 2^2) + (1 × 2^1) + (0 × 2^0) = 64 + 0 + 0 + 0 + 0 + 2 + 0 = 66
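A sketch of the same calculation (binaryIntToDecimal is an illustrative name; parseInt with radix 2 is the built-in way):

// Sum digit × 2^n from right to left.
function binaryIntToDecimal(bits) {
  let value = 0;
  for (let i = 0; i < bits.length; i++) {
    const power = bits.length - 1 - i; // exponent for this digit
    value += Number(bits[i]) * 2 ** power;
  }
  return value;
}

console.log(binaryIntToDecimal("1000010")); // 66
console.log(parseInt("1000010", 2));        // 66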

Converting binary fractions to decimal

To convert the fractional part of a binary number to decimal, multiply each digit after the binary point by 2^-n, where n starts at 1 for the first digit after the point and increases by 1 for each digit to the right, then add up all the products.

For example, the fractional part of binary 1000010.1001 converts to base 10 as follows:

0.1001 = (1 × 2^-1) + (0 × 2^-2) + (0 × 2^-3) + (1 × 2^-4) = 0.5 + 0 + 0 + 0.0625 = 0.5625
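A sketch of the fractional conversion (binaryFractionToDecimal is an illustrative name), reusing parseInt for the integer part:

// Sum digit × 2^(-n) for the digits after the binary point, left to right.
function binaryFractionToDecimal(bits) {
  let value = 0;
  for (let i = 0; i < bits.length; i++) {
    value += Number(bits[i]) * 2 ** -(i + 1);
  }
  return value;
}

console.log(binaryFractionToDecimal("1001"));                          // 0.5625
console.log(parseInt("1000010", 2) + binaryFractionToDecimal("1001")); // 66.5625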

Back to the problem: 0.1 and 0.2 are decimal numbers, and they have to be converted to binary before the addition can be performed.

Converting 0.1 to binary goes as follows:

expression    product    fractional part    integer part
0.1 * 2       0.2        0.2                0
0.2 * 2       0.4        0.4                0
0.4 * 2       0.8        0.8                0
0.8 * 2       1.6        0.6                1
0.6 * 2       1.2        0.2                1
0.2 * 2       0.4        0.4                0
0.4 * 2       0.8        0.8                0
0.8 * 2       1.6        0.6                1
0.6 * 2       1.2        0.2                1
...
The digits 0011 repeat forever, so the result is 0.0 0011 0011 0011..., an infinitely repeating binary fraction.

The binary expansion of 0.1 has infinitely many digits, but a computer can only store a finite number of bits: a JS number is fixed at 64 bits. The value therefore has to be cut off and rounded, which is why adding some decimals produces inaccurate results.
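We can see both the repeating pattern and the rounding directly, because toString(2) prints the binary expansion of the value that actually got stored:

console.log((0.1).toString(2));
// 0.000110011001100110011..., the 0011 pattern cut off and rounded
// after 52 mantissa bits
console.log((0.1 + 0.2).toString(2)); // the tiny rounding errors add up
console.log(0.1 + 0.2 === 0.3);       // false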

The way numbers are stored

Different languages store numbers in different ways, for example as integers or as floating-point values. In JS, all numbers are stored as floating point.

Floating-point numbers come in single precision and double precision. JS stores numbers as double-precision floating point, as defined by the IEEE 754 standard.

A 64-bit double-precision floating-point number is stored in three fields, laid out like this:

sign (1 bit) | exponent (11 bits) | mantissa (52 bits)

The highest bit is the sign bit S (sign), the next 11 bits are the exponent E (exponent), and the remaining 52 bits are the mantissa M (mantissa), which holds the significant digits.

According to IEEE 754, any floating-point number can be represented by the following formula:


V = (-1)^S × 2^E × M
  • S is the sign bit: it indicates whether the number is positive or negative (0 for positive, 1 for negative);

  • E is the exponent field: it stores the exponent plus a fixed offset (bias). The field is 11 bits long, so its stored value ranges from 0 to 2047. Because exponents in scientific notation can be negative, the convention is to subtract the bias 1023 when decoding: stored values in [0, 1022] correspond to negative exponents and [1024, 2047] to positive ones;

  • M is the mantissa field: it holds the significant digits. Extra bits beyond the 52 available are rounded: if the first dropped bit is 1 the value is rounded up, if it is 0 the extra bits are simply dropped. Because the integer part of a normalized binary number is always 1, this leading 1 is implied and not stored.

For example, take the 64-bit pattern 0 00000000011 1100000...0000. The sign bit S is 0, the exponent field is 00000000011 (decimal 3), and the mantissa starts with 11, so with the implicit leading 1 the significand is 1.11 in binary, i.e. 1.75. The value is therefore (-1)^0 × 1.75 × 2^(3-1023) = 1.75 × 2^-1020.
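To see these fields for a concrete number, we can read the raw 64 bits back out of a value. A minimal sketch (toBitString is just an illustrative helper name, not a built-in):

// Read the raw IEEE 754 bits of a JS number (big-endian) and split them
// into sign | exponent | mantissa.
function toBitString(num) {
  const buffer = new ArrayBuffer(8);
  new DataView(buffer).setFloat64(0, num); // big-endian by default
  let bits = "";
  for (const byte of new Uint8Array(buffer)) {
    bits += byte.toString(2).padStart(8, "0");
  }
  return bits[0] + " " + bits.slice(1, 12) + " " + bits.slice(12);
}

console.log(toBitString(0.1));
// 0 01111111011 1001100110011001100110011001100110011001100110011010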

Represents Infinity

Now that we know how numbers are stored, how is Infinity represented?

Infinity : 0 11111111111 0000...0000 (52 zeros)
As long as the exponent field is at its maximum (all 11 bits set to 1) and the mantissa is all zeros, so that the significand would be 1.0000...0, the value is defined to be Infinity.

Represents -Infinity

Similarly, -Infinity is represented the same way, but with the sign bit set to 1:

-Infinity : 1 11111111111 0000...0000 (52 zeros)
The same pattern with the sign bit set to 1 is defined to be -Infinity.

Represents NaN

So what does NaN look like?

NaN : 0 11111111111 1010100000...0000
As long as the exponent field is at its maximum (2047) and the mantissa is not all zeros, the value is NaN.
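Using the toBitString sketch from earlier (again, an illustrative helper, not part of the language), we can check these patterns:

console.log(toBitString(Infinity));  // 0 11111111111 0000...0 (52 zeros)
console.log(toBitString(-Infinity)); // 1 11111111111 0000...0 (52 zeros)
console.log(toBitString(NaN));       // exponent all ones, mantissa non-zero;
                                     // the exact NaN mantissa can vary by engine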

Represents the maximum number

When the exponent field is 2046 (11111111110) and all 52 mantissa bits are 1, we get the largest number:

0 11111111110 1111...1111 (52 ones)

This is equivalent to 1.1111...1 (52 ones after the binary point) × 2^(2046-1023) = Number.MAX_VALUE.
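We can verify this in the console; the significand 1.111...1 (52 ones) equals 2 - 2^-52:

console.log((2 - 2 ** -52) * 2 ** 1023 === Number.MAX_VALUE); // true
console.log(Number.MAX_VALUE); // 1.7976931348623157e+308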

Represents the minimum number

When the exponent field is 0 (00000000000) and the mantissa is 51 zeros followed by a single 1, we get the smallest positive number:

0 00000000000 0000...0001 (51 zeros, then a 1)

With the exponent field at 0, the number is denormalized: there is no implicit leading 1 and the effective exponent is 1 - 1023 = -1022. The mantissa, read as a fraction, is 0.0000...0001 (51 zeros and then a 1), which is 2^-52, an extremely small value. So:

V = (-1)^0 × 2^-1022 × 2^-52 = 2^-1074 = Number.MIN_VALUE
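Again this can be checked directly:

console.log(2 ** -1074 === Number.MIN_VALUE); // true
console.log(Number.MIN_VALUE);                // 5e-324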

Represents the maximum safe integer

So how do we represent the largest safe integer? (A safe integer is one that can be represented exactly, and every integer from 1 up to it can also be represented exactly, so the integers in that range are continuous with no gaps.)

If you want the integers to stay continuous, you fill the mantissa with all ones and pick an exponent that still leaves the value an integer.

That gives 1.111...1 (52 ones after the binary point) × 2^?, and we need the result to be an integer. If the exponent is 52, the binary point moves past all 52 mantissa bits; removing the point leaves the binary number 111...1 (53 ones in total) = 2^53 - 1 = 9007199254740991 = Number.MAX_SAFE_INTEGER.
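A quick check, including what happens just past the safe range:

console.log(2 ** 53 - 1 === Number.MAX_SAFE_INTEGER); // true
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
// Beyond this limit, neighbouring integers can no longer be told apart:
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true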

Represents the minimum safe integer

Similarly, for the smallest safe integer we only need to set the sign bit to 1:

-111...1 (53 ones in total) = -(2^53 - 1) = -9007199254740991 = Number.MIN_SAFE_INTEGER
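And the matching check:

console.log(-(2 ** 53 - 1) === Number.MIN_SAFE_INTEGER); // true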

References

This article is the result of a lot of searching and testing across the Internet. Thanks to those who came before; standing on the shoulders of giants, let's keep going!

developers.weixin.qq.com/community/d…
blog.csdn.net/gdhgr/artic…
www.jianshu.com/p/c4bf75048…