If you’ve ever developed an iOS app that involves calculating monetary amounts, you’ve probably run into a loss of accuracy when using floating-point numbers.

Let’s take a look at why this precision is lost and how to fix it.

Why do floating-point numbers lose precision?

I won’t systematically explain how floating-point types are built from a sign, mantissa, and exponent. I’ll just summarize the cause: in binary, the fractional part of a number is assembled from negative powers of 2, that is, 0.5, 0.25, 0.125, and so on. No matter how we combine these fractions, we can never land exactly on 0.3, so the computer stores the closest approximation it can represent.

Therefore, we can draw the following conclusions about precision loss:

  • In Swift, integers never lose precision, because every integer (within the type’s range) can be represented exactly in base 2
  • Because of the way Swift stores floating-point types (Double/Float) in binary, precision loss is bound to occur for many decimal values, as the sketch below shows
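
A minimal sketch of this, assuming a standard 64-bit Double (the exact digits printed can vary in the last places, so treat the comments as illustrative):

import Foundation

// 0.3 cannot be represented exactly in binary floating point.
let value = 0.3
print(String(format: "%.20f", value))  // close to, but not exactly, 0.3

// The classic consequence: sums of approximations don't equal the literal we expect.
print(0.1 + 0.2 == 0.3)                // false
print(0.1 + 0.2)                       // 0.30000000000000004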

Effects of loss of numerical accuracy

Above, we briefly explained why precision is lost. So when does this loss actually affect us?

Based on my experience, I think the main scenarios are as follows:

  • When a number needs to be displayed to the outside world in its literal form
  • When a number needs to be sent to the server for strict comparison (no difference in any digit)

So precision loss is not as terrible as it sounds (the cases where it matters are fairly rare). Let’s look at how to fix it when we do run into a precision problem.

How to deal with loss of numerical accuracy

  1. Use Double throughout the calculation and convert to a string at the end

    Since Swift keeps many decimal places when precision is lost (0.3, for example, is stored as something like 0.299999999999999), the difference from the true value is tiny, so we can leave the values as Double during the whole calculation and do nothing special with them. Only at the last moment do we round them into a string for display or for sending to the server, and the result is almost always what we expect.

    But remember not to round during the calculation itself, otherwise errors are very likely to accumulate until they become unacceptable.

  2. Receive and calculate with Decimal

    The approach above is simple, requiring only a last-minute string conversion, but it has a drawback: you have to ask the server to exchange the value as a string rather than its original numeric type, which is not a friendly arrangement. So is there a way for the app to send the server a JSON payload whose floating-point number has not lost precision, for example {“num”: 0.3} instead of {“num”: 0.29999999999999999}?

    The answer is yes. Swift provides a type for decimal arithmetic: Decimal. It supports the +, -, *, and / operators and conforms to Codable, so we can use it to receive parameter values from the server, carry it through every calculation, and finally encode it directly into the JSON payload. Apart from a few special cases, we avoid binary floating-point numbers entirely, so no error is introduced. A sketch of both approaches follows this list.
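
A rough sketch of both approaches; the format width and the Payload type are just examples made up for illustration, and the encoded JSON shown in the comment is what I would expect on Apple platforms:

import Foundation

// Approach 1: keep Double during the calculation, round to a string only at the end.
let subtotal = 0.1 + 0.2                        // stored as an approximation of 0.3
let display = String(format: "%.2f", subtotal)  // "0.30", rounded once at the last moment
print(display)

// Approach 2: use Decimal end to end and let Codable put it into the JSON payload.
struct Payload: Codable {
    let num: Decimal
}

let a = Decimal(string: "0.1")!
let b = Decimal(string: "0.2")!
let payload = Payload(num: a + b)               // exactly 0.3, no binary floating point involved

let data = try! JSONEncoder().encode(payload)
print(String(data: data, encoding: .utf8)!)     // expected: {"num":0.3}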

The difference between NSDecimalNumber and Decimal

NSDecimalNumber is a subclass of NSNumber and is much more powerful than NSNumber: it can round with configurable modes, automatically strip useless trailing zeros from values, and so on. Because NSDecimalNumber is more precise, it is also slower than the basic numeric types, so there are trade-offs. Apple officially recommends it for currency and other high-precision scenarios.

Typically we use NSDecimalNumberHandler to describe the behavior we want (rounding mode, scale, error handling) and then produce the desired NSDecimalNumber:

import Foundation

// Example inputs: a Double that has lost precision, rounded to 2 decimal places.
let doubleValue = 0.299999999999999
let mode: NSDecimalNumber.RoundingMode = .plain
let decimal = 2

let ouncesDecimal: NSDecimalNumber = NSDecimalNumber(value: doubleValue)
let behavior: NSDecimalNumberHandler = NSDecimalNumberHandler(roundingMode: mode,
                                                              scale: Int16(decimal),
                                                              raiseOnExactness: false,
                                                              raiseOnOverflow: false,
                                                              raiseOnUnderflow: false,
                                                              raiseOnDivideByZero: false)
let roundedOunces: NSDecimalNumber = ouncesDecimal.rounding(accordingToBehavior: behavior)
print(roundedOunces)  // 0.3

NSDecimalNumber and Decimal are essentially seamlessly bridged: Decimal is a value type (a struct), while NSDecimalNumber is a reference type (a class). NSDecimalNumber looks more versatile, but if all you need is to limit the number of decimal places and round, Decimal can do that with better performance. So I treat NSDecimalNumber as a fallback for the things Decimal cannot do.

In general, the relationship between NSDecimalNumber and Decimal is similar to that between NSString and String.
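
As a rough sketch of what I mean, the value-type route can round without ever creating an NSDecimalNumber (NSDecimalRound is Foundation’s function for rounding a Decimal in place), and the bridge is there when a class is required:

import Foundation

// Round a Decimal to 2 places using only the value type.
var amount = Decimal(string: "9021.234891")!
var rounded = Decimal()
NSDecimalRound(&rounded, &amount, 2, .plain)
print(rounded)                        // 9021.23

// Bridging both ways when an Objective-C style API needs the class type.
let reference: NSDecimalNumber = rounded as NSDecimalNumber
let backToValue: Decimal = reference as Decimal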

The correct way to use Decimal

Assigning Decimal values correctly during JSON deserialization: use ObjectMapper

When we declare a Decimal property and then decode it from a JSON string, we find that precision is still lost. Why is that?

import Foundation

struct Money: Codable {
    let amount: Decimal
    let currency: String
}

let json = "{\"amount\": 9021.234891, \"currency\": \"CNY\"}"
let jsonData = json.data(using: .utf8)!
let decoder = JSONDecoder()
let money = try! decoder.decode(Money.self, from: jsonData)
print(money.amount)  // not exactly 9021.234891, the precision is already lost

The answer is simple: JSONDecoder uses JSONSerialization internally to deserialize, and its logic is straightforward. When it meets the number 9021.234891, it treats it as a Double. That Double can then be converted to a Decimal, but it has already lost precision, so the converted Decimal has lost precision as well.

For this problem, we must be able to control the deserialization process. My current choice is to use ObjectMapper, which has the flexibility to control serialization and deserialization using custom rules.

ObjectMapper does not support Decimal by default. We can create a TransformType that supports Decimal as follows:

import Foundation
import ObjectMapper

open class DecimalTransform: TransformType {
    public typealias Object = Decimal
    public typealias JSON = Decimal

    public init() {}

    open func transformFromJSON(_ value: Any?) -> Decimal? {
        if let number = value as? NSNumber {
            // Build the Decimal from the number's string description
            // instead of going through a lossy Double conversion.
            return Decimal(string: number.description)
        } else if let string = value as? String {
            return Decimal(string: string)
        }
        return nil
    }

    open func transformToJSON(_ value: Decimal?) -> Decimal? {
        return value
    }
}

We then apply this TransformType to the properties we want to transform

struct Money: Mappable {
    var amount: Decimal?
    var currency: String?

    init() {}
    init?(map: Map) {}

    mutating func mapping(map: Map) {
        amount <- (map["amount"], DecimalTransform())
        currency <- map["currency"]
    }
}
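
For completeness, a quick usage sketch; the JSONString initializer and toJSONString() come from ObjectMapper’s Mappable/BaseMappable extensions, and the literal is just the example value from earlier:

let json = "{\"amount\": 9021.234891, \"currency\": \"CNY\"}"
if let money = Money(JSONString: json) {
    print(money.amount ?? 0)       // 9021.234891, no precision lost
    print(money.toJSONString() ?? "")
}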

The correct way to initialize Decimal

There are many ways to initialize a Decimal: we can pass in an integer, a floating-point value, or a string. I think the correct way is to pass a string.

The reason is similar to the deserialization problem above: when we pass in a Double, Swift first stores it as a binary floating-point value, and that step already loses precision. Initializing a Decimal from a Double that has already lost precision naturally gives a Decimal that has lost precision too. The sketch below illustrates the difference.
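
A small sketch of the difference, with the caveat that the exact digits printed for the Double-based initializer can vary:

import Foundation

// Initializing from a Double: the literal becomes a binary floating-point value first,
// so the resulting Decimal inherits the lost precision.
let fromDouble = Decimal(0.3)
print(fromDouble)                      // a long value close to, but not exactly, 0.3

// Initializing from a String: parsed directly as decimal digits, nothing is lost.
let fromString = Decimal(string: "0.3")!
print(fromString)                      // 0.3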

Reference

  • Decoding money from JSON in Swift