Preface

Before the article begins, watch the following “weird” scene.

```python
a, b = 0.1, 0.2

print(a + b == 0.3)
print(a + b)
```

Out:

```
False
0.30000000000000004
```

0.1 + 0.2 does not equal 0.3? I don't know how you felt the first time you saw this; I, for one, briefly questioned my life. Why does this happen?

Floating-point limits

Floating-point numbers are represented in computer hardware as base-2 (binary) fractions. Let's first see how 0.125 can be written in both decimal and binary:

0.125 (decimal) = 1/10 + 2/100 + 5/1000

0.125 (binary) = 0.001 = 0/2 + 0/4 + 1/8

Both expansions represent the same value, 0.125; the only real difference is that the first is written in base 10 and the second in base 2.

Unfortunately, most decimal fractions cannot be expressed exactly as binary fractions. A decimal fraction can be stored exactly only when its binary expansion is finite, that is, when its denominator is a power of 2 (2^n), such as 0.5 (1/2) or 0.125 (1/8). As a result, in most cases the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored on the computer.
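You can check this directly in Python: `float.as_integer_ratio()` reveals the exact fraction a float actually stores, so a power-of-two denominator means the value is exact (a quick sketch using only the standard library):

```python
# float.as_integer_ratio() returns the exact fraction stored for a float.
# Values whose denominator is a power of 2 are stored exactly.
print((0.5).as_integer_ratio())    # (1, 2)  -> exact
print((0.125).as_integer_ratio())  # (1, 8)  -> exact
print((0.1).as_integer_ratio())    # a huge power-of-two ratio -> approximation
```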

Take the 0.1 above: let's work out its binary expansion by hand.

Note: to convert a decimal integer to binary, repeatedly divide by 2 and collect the remainders; to convert a decimal fraction to binary, repeatedly multiply by 2 and collect the integer parts.

Calculation process:

```
0.1 * 2 = 0.2  # 0
0.2 * 2 = 0.4  # 0
0.4 * 2 = 0.8  # 0
0.8 * 2 = 1.6  # 1
0.6 * 2 = 1.2  # 1
0.2 * 2 = 0.4  # 0
0.4 * 2 = 0.8  # 0
...
```
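The by-hand procedure above can be sketched in a few lines of Python (`frac_to_binary` is a hypothetical helper name, not from any library):

```python
# A minimal sketch of the multiply-by-2 method: repeatedly double the
# fraction and peel off the integer part as the next binary digit.
def frac_to_binary(x, bits=20):
    digits = []
    for _ in range(bits):
        x *= 2
        digit = int(x)       # the integer part is the next binary digit
        digits.append(str(digit))
        x -= digit           # keep only the fractional part
    return "0." + "".join(digits)

print(frac_to_binary(0.1))    # the 0011 pattern repeats without end
print(frac_to_binary(0.125))  # terminates after 001, then all zeros
```

Note that doubling, truncating, and subtracting are all exact in binary floating point, so the first few dozen digits produced this way faithfully match the hand calculation.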

As can be seen from the above results, the binary of 0.1 is:

0.0001100110011001100110011001100110011001100110011...

This is an infinitely repeating binary fraction, but computer memory is finite; we cannot store all of its digits. So what's the solution?

The answer is to cut it off at some point and keep an approximation. This is why, in most modern programming languages that use the processor's floating-point arithmetic, floating-point numbers can only be stored as approximate binary fractions.

Many Python users are unaware of this difference, because Python prints only a decimal approximation of the binary value stored on the computer. But keep in mind: even though the output looks like the exact value 0.1, the value actually stored is merely the representable binary value closest to 0.1.
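One way to see the difference is to feed a float directly to Decimal, which exposes the exact binary value being stored (a small sketch using only the standard library):

```python
from decimal import Decimal

# print() shows the shortest decimal string that rounds back to the
# same float, while Decimal(float) reveals the exact value in memory.
print(0.1)           # 0.1
print(Decimal(0.1))  # the exact stored value, slightly above 0.1
```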

The solution

1. Decimal

The decimal module performs decimal arithmetic; pass the value in as a string so it is never rounded to a binary float in the first place.

```python
from decimal import Decimal

a, b = Decimal('0.1'), Decimal('0.2')
print(a + b == Decimal('0.3'))
```

Out:

```
True
```

2. numpy.float32

Use the numpy module to store the data as 32-bit floats (float32).

```python
import numpy as np

temp = np.array([0.1, 0.2, 0.3], dtype=np.float32)
print(temp[0] + temp[1] == temp[2])
```

Out:

```
True
```

Of course, higher accuracy may come at some cost in performance, and in practice the tiny deviations of these approximations often don't matter. Just keep an eye out for the cases where they do!

To sum up: floating-point numbers lose precision when converted to binary and then back to decimal. I hope that clears it up!

Well, that's all for this share. If you found it useful, give it a like and a follow. Thanks for your support!