**The code in this article is written in Java SE 8.**
If you compare double variables with ==, static analysis tools will flag it as a bug.
I got curious about this, so I looked into how floating point numbers should be handled (as of now).
In short: if you don't care about performance, use BigDecimal. If you can change the design of the database, you can shift the digits and handle the values with int or long.
First of all, take a look at this code.
code
```java
double doubleValue1 = 1.0d;
double doubleValue2 = 0.9d;
double resultValue = doubleValue1 - doubleValue2;
System.out.println("doubleValue1: " + doubleValue1);
System.out.println("doubleValue2: " + doubleValue2);
System.out.println(resultValue == 0.1d);
System.out.println("Subtraction result: " + resultValue);
```
Execution result
```
doubleValue1: 1.0
doubleValue2: 0.9
false
Subtraction result: 0.09999999999999998
```
The subtraction result is not what you would expect.
Computers handle numbers in binary.
However, some decimal fractions, such as 0.9, become infinitely recurring fractions when expressed in binary.
Since a computer can only hold a finite number of digits, the value is rounded to the nearest representable one, which introduces an error.
Because the subtraction is actually performed with a value slightly different from 0.9, the result comes out as above.
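You can see this rounding directly: the `BigDecimal(double)` constructor converts the exact binary value a double actually stores, so printing it reveals the hidden error (a small illustration I added; it is not in the original article):

```java
import java.math.BigDecimal;

public class ExactDoubleValue {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the double's exact binary value,
        // revealing what 0.9 really looks like once stored in a double
        System.out.println(new BigDecimal(0.9)); // slightly more than 0.9
        System.out.println(new BigDecimal(0.1)); // slightly more than 0.1
    }
}
```

Both printed values are long decimal expansions that are close to, but not exactly, 0.9 and 0.1, which is why `1.0 - 0.9` above did not come out as 0.1.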
In a core banking system, even a 1-yen discrepancy can lead to a lawsuit, so this is a serious problem.
Use BigDecimal to solve the problem!
code
```java
import java.math.BigDecimal;

BigDecimal bigDecimalValue1 = new BigDecimal("1.0");
BigDecimal bigDecimalValue2 = new BigDecimal("0.9");
BigDecimal bigDecimalResultValue = bigDecimalValue1.subtract(bigDecimalValue2);
System.out.println("bigDecimalValue1: " + bigDecimalValue1);
System.out.println("bigDecimalValue2: " + bigDecimalValue2);
System.out.println(bigDecimalResultValue.equals(new BigDecimal("0.1")));
System.out.println("Subtraction result: " + bigDecimalResultValue);
```
Execution result
```
bigDecimalValue1: 1.0
bigDecimalValue2: 0.9
true
Subtraction result: 0.1
```
BigDecimal holds the value as an unscaled integer plus a scale, i.e. the decimal point is shifted so that the digits are stored as an integer. Since every integer can be represented exactly by a finite binary number, BigDecimal can calculate and compare decimal values exactly.
For the available methods, refer to the official reference.
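One caveat to the comparison above (my own addition, not from the original article): `BigDecimal.equals()` considers the scale as well as the numeric value, so 0.1 and 0.10 are not equal under `equals()`. For a purely numerical comparison, use `compareTo()`:

```java
import java.math.BigDecimal;

public class BigDecimalComparison {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.10"); // same number, different scale

        // equals() compares both the unscaled value and the scale
        System.out.println(a.equals(b));         // false
        // compareTo() compares only the numerical value
        System.out.println(a.compareTo(b) == 0); // true
    }
}
```

The article's example happens to work with `equals()` because 1.0 − 0.9 produces a result with scale 1, which matches `new BigDecimal("0.1")` exactly.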
Since BigDecimal is a reference type, its arithmetic is naturally slower than operations on primitive types. Let's check.
code
```java
double doubleValue1 = 1.0d;
double doubleValue2 = 0.9d;
long startDouble = System.currentTimeMillis();
// Subtraction 100 million times (double)
for (int i = 0; i < 100_000_000; i++) {
    double resultValue = doubleValue1 - doubleValue2;
}
long endDouble = System.currentTimeMillis();

BigDecimal bigDecimalValue1 = new BigDecimal("1.0");
BigDecimal bigDecimalValue2 = new BigDecimal("0.9");
long startBigDecimal = System.currentTimeMillis();
// Subtraction 100 million times (BigDecimal)
for (int i = 0; i < 100_000_000; i++) {
    BigDecimal bigDecimalResultValue = bigDecimalValue1.subtract(bigDecimalValue2);
}
long endBigDecimal = System.currentTimeMillis();

System.out.println("double measurement result: " + (endDouble - startDouble) + "ms");
System.out.println("bigDecimal measurement result: " + (endBigDecimal - startBigDecimal) + "ms");
```
Execution result
```
double measurement result: 3ms
bigDecimal measurement result: 314ms
```
There was a considerable difference. Primitive types are fast, after all!
Personally, though, I think BigDecimal is fine unless you have heavy loops or strict performance requirements.
Sometimes you don't need to use floating point numbers at all.
For example, suppose you have a system that registers body weight to one decimal place. Even if the interface receives the value as 65.3 kg, you can shift the digits by one at the upstream edge of the program and handle it internally as the integer 653, processing it normally with int. Free yourself from the hassle of floating point processing!
However, this also affects the column definitions in the database, so it is best to settle it firmly at the design stage.
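As a minimal sketch of this fixed-point idea (the class and method names are my own, not from any particular system): parse the value once at the boundary, scale it to tenths of a kilogram, and do all internal arithmetic in exact int:

```java
import java.math.BigDecimal;

public class FixedPointWeight {
    // Convert "65.3" (kg, one decimal place) to 653 (tenths of a kg)
    static int toTenths(String weightKg) {
        BigDecimal d = new BigDecimal(weightKg);
        // throws ArithmeticException if more than one decimal place was supplied
        return d.movePointRight(1).intValueExact();
    }

    // Format 653 back to "65.3" for display
    static String toDisplay(int tenths) {
        return (tenths / 10) + "." + Math.abs(tenths % 10);
    }

    public static void main(String[] args) {
        int w1 = toTenths("65.3");
        int w2 = toTenths("0.9");
        int diff = w1 - w2; // exact integer arithmetic, no rounding error
        System.out.println(toDisplay(diff)); // 64.4
    }
}
```

Integer subtraction here is both exact and as fast as any primitive operation; the only places that need care are the conversion at the system boundary and the matching column definition in the database.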