In daily development we inevitably run into cumulative numeric calculations. Sometimes we reach for double, only to find during testing that the result is not what we expected.
First of all, the float and double types were designed mainly for scientific computation. They perform arithmetic in binary floating point, which is fast and works fine in most cases, but in some financial scenarios it cannot deliver the precision we need.
Because binary cannot represent decimal fractions exactly, no negative power of 10 (0.1, 0.01, and so on) has an exact binary representation. Here is an example:
```java
double d1 = 0.11;
double d2 = 0.2;
System.out.println(d2 - d1);
```
The result is 0.09000000000000001, not the 0.09 you might expect. Some people may look at the trailing digits and say that with so many mantissa digits you don't really need that much precision, but real-world data is large and varied, and simply chopping off the tail means losing accuracy. Java provides a solution for this: BigDecimal.
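For comparison, here is a minimal sketch (the class name is just illustrative) of the same subtraction done with BigDecimal's String constructor, which represents the decimal values exactly:

```java
import java.math.BigDecimal;

public class ExactSubtraction {
    public static void main(String[] args) {
        BigDecimal d1 = new BigDecimal("0.11"); // String constructor keeps the exact decimal value
        BigDecimal d2 = new BigDecimal("0.2");
        System.out.println(d2.subtract(d1));    // prints 0.09
    }
}
```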
If you have done financial calculations you have certainly come across this class; if not, you may want to go back and check your code for this kind of bug. Of course, int and long can sometimes solve part of the problem (for example, by storing amounts in the smallest unit), but that approach feels limited in terms of scenarios and applications.
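For completeness, here is a hedged sketch of that int/long workaround, storing money in the smallest unit (cents) so all arithmetic stays integral; the names and values are purely illustrative:

```java
public class CentsDemo {
    public static void main(String[] args) {
        long priceInCents = 20;   // 0.20 expressed as 20 cents
        long costInCents = 11;    // 0.11 expressed as 11 cents
        long diff = priceInCents - costInCents;
        // Convert back to a decimal string only for display: prints 0.09
        System.out.printf("%d.%02d%n", diff / 100, diff % 100);
    }
}
```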
BigDecimal makes it easy to handle many data formats, not to mention addition, subtraction, multiplication, and division. The setScale() method lets you keep as many decimal places as you want, rounded however you want. Another handy method is stripTrailingZeros(), which removes trailing zeros; note that some values turn into scientific notation if you call toString() after stripping the zeros, whereas toPlainString() prints them in plain notation.
```java
BigDecimal a = new BigDecimal("0.12300");
System.out.println(a.stripTrailingZeros().toPlainString());
```
The result: 0.123
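And here is a small sketch of the setScale() behavior mentioned above (the input value is made up for illustration):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("3.14159");
        // Keep two decimal places, rounding half up: prints 3.14
        System.out.println(value.setScale(2, RoundingMode.HALF_UP));
        // Keep four decimal places, rounding toward positive infinity: prints 3.1416
        System.out.println(value.setScale(4, RoundingMode.CEILING));
    }
}
```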
Since BigDecimal is so convenient, why do we tend to avoid it unless we really need it? Before answering, note that a numeric field that is never explicitly assigned does not behave the way you might expect, so here's a quick aside:
A small pitfall with default values of member variables
Using a local variable before assigning it a value will cause a compile error. A member (class-level) variable, however, compiles fine and prints 0: when a variable is scoped at the class level and you do not assign it manually, the JVM automatically gives it the default value of its type.
For non-primitive types, class fields default to null. So class fields should normally still be initialized explicitly, and local variables must be initialized before use or the code will not compile; see the sketch below.
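The code for those two cases did not survive in this write-up, so here is a reconstructed sketch of what they describe (the names are illustrative):

```java
import java.math.BigDecimal;

public class DefaultValueDemo {
    static int count;            // primitive member: JVM defaults it to 0
    static BigDecimal amount;    // non-primitive member: defaults to null

    public static void main(String[] args) {
        System.out.println(count);   // prints 0
        System.out.println(amount);  // prints null

        int local;                    // local variables get no default value
        // System.out.println(local); // uncommenting this fails to compile:
        //                            // "variable local might not have been initialized"
    }
}
```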
Okay, back to BigDecimal. Compared with plain primitive arithmetic it is more verbose, and because it creates objects it is also a little slower, although that small cost is usually nothing for a machine.
To conclude: if you do not need high precision and just want simple, convenient calculations, basic primitive arithmetic is fine and there is no need to wrap everything in new BigDecimal(). But if your data demands precision, for example money, BigDecimal will save you a lot of trouble.