Asking the question
What’s 0.1 times 0.1?
Isn’t that 0.01? You’re probably going to think I’m insulting your intelligence. Of course, when we humans do the arithmetic, the answer is 0.01. But this is the computer age, so let’s ask the computer to calculate 0.1 times 0.1 — is the answer still 0.01?
Explore
Some of you may already know the answer, but since we’re developers, let’s verify it for ourselves. Non-developers can follow along to find the answer and see how a computer’s handling of this differs from ours.
```swift
let num: Float = 0.1
print(num * num) // 0.010000001
```
Did you expect that output? Let’s try a few more cases just to be sure.
```swift
let a = Float(1.0) - Float(0.9)
print(a) // 0.100000024

let b = Float.init(0.9) - Float.init(0.8)
print(b) // 0.099999964
```
Hmm, is Float just not precise enough? Let’s try Double.
```swift
let num: Double = 0.1
print(num * num) // 0.010000000000000002

let a = Double(1.0) - Double(0.9)
print(a) // 0.09999999999999998

let b = Double.init(0.9) - Double.init(0.8)
print(b) // 0.09999999999999998
```
As you can see, although it’s a little more accurate, it’s still not exactly what we want. If you used results like these in further calculations, you’d be in serious trouble — and if these were monetary amounts in a financial system, the deviation that eventually shows up in the totals is exactly the kind of thing programmers get “sacrificed to the heavens” over.
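To make the danger concrete, here’s a minimal sketch (my own illustration, not from the experiments above) showing how these tiny errors compound when a Double value is added repeatedly:

```swift
// Adding 0.1 ten times with Double: the rounding error accumulates,
// so the sum is not exactly 1.0.
var total: Double = 0
for _ in 0..<10 {
    total += 0.1
}
print(total)        // 0.9999999999999999
print(total == 1.0) // false
```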
Now let’s try a data type called Decimal, which experienced programmers will of course already know.
```swift
let num: Decimal = 0.1
print(num * num) // 0.01

let a = Decimal(1.0) - Decimal(0.9)
print(a) // 0.1

let b = Decimal.init(0.9) - Decimal.init(0.8)
print(b) // 0.1
```
Summary
With Decimal, the calculations finally return the values we want. It’s up to us to decide when we need Decimal and when we don’t. In general, the data the back end returns to us arrives as a String; if it’s only for display, just display it as-is. Once calculations are involved, it’s best to convert to Decimal first to avoid losing precision, as sketched below.
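Here’s a minimal sketch of that conversion, using hypothetical amount fields coming back from the server:

```swift
import Foundation

// Hypothetical amount strings returned by the back end.
let priceString = "19.99"
let quantityString = "3"

// Decimal(string:) parses the text directly, so no binary rounding sneaks in.
if let price = Decimal(string: priceString),
   let quantity = Decimal(string: quantityString) {
    let total = price * quantity
    print(total) // 59.97
}
```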
The principle
Now let’s dig a little deeper: why do computers lose precision with decimal fractions?
First of all, we know that computers only understand zeros and ones; no matter how sophisticated our high-level language is, everything ultimately ends up as a sequence of zeros and ones. So here’s the question: 0.1 is a decimal (base-10) number — what does it look like in binary?
If you’re not sure how to do the conversion, you can look up the method here: https://jingyan.baidu.com/article/425e69e6e93ca9be15fc1626.html, or get the result directly from this converter: https://www.rapidtables.com/convert/number/decimal-to-binary.html
Still, we might as well work it out by hand and check that we get the same answer the converter gives. To turn the fractional part into binary, we repeatedly multiply by 2; at each step the integer part of the product (0 or 1) is the next binary digit, and we carry on multiplying the remaining fraction.
```
0.1 * 2 = 0.2   -> 0
0.2 * 2 = 0.4   -> 0
0.4 * 2 = 0.8   -> 0
0.8 * 2 = 1.6   -> 1
0.6 * 2 = 1.2   -> 1
0.2 * 2 = 0.4   -> 0
0.4 * 2 = 0.8   -> 0
0.8 * 2 = 1.6   -> 1
0.6 * 2 = 1.2   -> 1
0.2 * 2 = 0.4   -> 0
0.4 * 2 = 0.8   -> 0
```
As we can see, the digits fall into a repeating loop, giving 0.000110011001100… — consistent with what the converter shows. A computer can’t let this go on forever, so the sequence has to be cut off at some finite number of bits and the last bit rounded; exactly which digit it ends on doesn’t really matter here. The important thing is that we understand the principle and know how to avoid the pitfall.
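If you’d like to reproduce the hand calculation in code, here’s a small sketch (my own illustration, not part of the original walkthrough):

```swift
// The multiply-by-2 method from above: at each step, the integer part of
// the product is the next binary digit of the fraction.
func binaryFractionDigits(of value: Double, maxDigits: Int = 20) -> String {
    var fraction = value
    var digits = ""
    for _ in 0..<maxDigits where fraction != 0 {
        fraction *= 2
        if fraction >= 1 {
            digits += "1"
            fraction -= 1
        } else {
            digits += "0"
        }
    }
    return digits
}

print("0.1 ≈ 0." + binaryFractionDigits(of: 0.1))
// 0.1 ≈ 0.00011001100110011001 — the same repeating pattern as the hand calculation
```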
Conclusion
In simple terms, some decimal fractions cannot be represented exactly by a finite number of binary digits; when converted, they become repeating binary fractions, much like 1/3 = 0.333333… is a repeating decimal in base 10. So whenever decimal fractions need to be calculated precisely, we use the high-precision Decimal type.
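As a final illustration (my own example, using the classic 0.1 + 0.2 case), here’s the difference side by side:

```swift
import Foundation

// Double: the binary representations of 0.1 and 0.2 are both approximations.
let doubleSum = 0.1 + 0.2
print(doubleSum)        // 0.30000000000000004
print(doubleSum == 0.3) // false

// Decimal built from strings: the values are stored in base 10, so the sum is exact.
let decimalSum = Decimal(string: "0.1")! + Decimal(string: "0.2")!
print(decimalSum)                            // 0.3
print(decimalSum == Decimal(string: "0.3")!) // true
```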