If you’ve ever developed an iOS app that calculates monetary amounts, you’ve probably run into precision loss when using floating-point numbers.

Let’s take a look at why this happens and how to fix it.

Why do floating-point numbers lose precision?

I won’t systematically explain how floating-point types are built from a sign, exponent, and mantissa. I’ll just explain the cause: in binary notation, the fractional digits stand for negative powers of 2, that is, 0.5, 0.25, 0.125, and so on. No matter how we combine these terms, we can never sum to exactly 0.3, so the computer stores the representable value that comes closest to 0.3.
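
A quick check in a playground makes this concrete. The exact digits printed depend on the formatting used, but the mismatch is the point:

```swift
import Foundation

let sum = 0.1 + 0.2
print(sum)                            // 0.30000000000000004
print(sum == 0.3)                     // false

// Printing 0.3 itself with extra digits reveals the stored approximation.
print(String(format: "%.17f", 0.3))   // 0.29999999999999999
```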

Therefore, we can draw the following conclusions about precision loss:

  • In Swift, integers never lose precision: consecutive integers are exactly 1 apart, and every value in an integer type’s range has an exact base-2 representation
  • Because Swift stores floating-point types (Double/Float) as binary floating-point values, precision loss is bound to occur for many decimal fractions

Effects of precision loss

We’ve briefly explained why precision is lost. So when does this loss actually affect us?

Based on my experience, I think the main scenarios are as follows:

  • When a number needs to be displayed to the user in its literal, human-readable form
  • When a number needs to be sent to the server for strict comparison (identical in every bit)

So precision loss is not a terrible thing (the scenarios where it matters are rare). Let’s take a look at how to fix it when we do run into a precision problem.

How to deal with precision loss

  1. Use Double throughout the calculation and convert to a string at the end

    When Swift loses precision, it keeps many decimal places (0.3, for example, is stored as approximately 0.29999999999999999), and the difference between these values and the true value is tiny. So we can simply leave the values as Double throughout the calculation, and round to a string only at the last moment, when sending to the server or displaying to the user. The result is almost always correct (see the first sketch after this list).

    But remember not to round during intermediate steps; otherwise rounding errors are very likely to accumulate until they become unacceptable.

  2. Receive and calculate with the Decimal type

    The above approach is simple, requiring only a string conversion at the end, but it has a drawback: you have to ask the server to accept a string in place of the original numeric type, which is not a friendly contract. So is there a way for the app to send the server a JSON payload containing a floating-point value with no lost precision, e.g. {“num”: 0.3} instead of {“num”: 0.29999999999999999}?

    The answer is yes. Foundation provides a type for decimal arithmetic: Decimal. It supports the +, -, *, and / operators and conforms to Codable, so we can decode server parameters into it, use it for all business calculations, and finally encode it directly back into the JSON payload. Binary floating-point numbers are avoided entirely throughout, so no error is introduced (see the second sketch after this list).
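
Here is a minimal sketch of the first approach; the variable names are made up for illustration:

```swift
import Foundation

// Keep the value as Double through every intermediate step...
let unitPrice: Double = 0.1
let total = unitPrice * 3            // stored internally as 0.30000000000000004

// ...and round only at the very end, when producing the output string.
let display = String(format: "%.2f", total)
print(display)                       // "0.30"
```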
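
And a sketch of the second approach; the Payload type and its field are hypothetical. One caveat worth noting: on older Foundation versions, JSONDecoder parsed Decimal values by way of Double first (the reference below discusses this), so constructing Decimal from strings, as shown here, is the safe route:

```swift
import Foundation

struct Payload: Codable {
    let num: Decimal   // hypothetical field matching {"num": 0.3}
}

// Build Decimal from strings rather than Double literals, so that no
// binary approximation sneaks in.
let a = Decimal(string: "0.1")!
let b = Decimal(string: "0.2")!
let sum = a + b
print(sum)                                      // 0.3, exactly

// Decimal is Codable, so it encodes straight into the JSON payload.
let data = try! JSONEncoder().encode(Payload(num: sum))
print(String(data: data, encoding: .utf8)!)     // {"num":0.3}
```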

Reference

  • Decoding money from JSON in Swift