[JAVA] The story of why decimal numbers are dangerous in programming.

Great care should be taken when dealing with decimal numbers in programming. Since some of you have just started writing programs for work this spring, I'll explain it as clearly as I can, with reasons, in order to raise awareness. It's common sense to those who already know it, but it's a big pitfall for those who don't. If someone around you doesn't seem to know how dangerous it is, please let them know.

Why are decimals dangerous?

The bottom line is that computers handle numbers in binary, while the real world mostly calculates in decimal. Calculating with decimal fractions while converting between these radixes is what makes it dangerous.

So why is it dangerous to handle decimal fractions while converting the radix? First, let's talk about ternary and decimal numbers.

What is the ternary number 0.1?

The ternary number 0.1 is the number that becomes 1 when added three times: 0.1 + 0.1 = 0.2, and 0.2 + 0.1 carries up to 1. Expressed as a fraction, it is 1/3. So what is 1/3 in decimal? It is likewise the number that becomes 1 when added three times: 0.3333333333333333..., an infinite decimal. So 0.1, which is a finite fraction in ternary, becomes an infinite decimal in base 10. And because computer resources are finite, a computer cannot hold infinitely many digits. A 64-bit floating-point value holds only about 16 significant decimal digits. Therefore, when a computer holds a decimal fraction, it may actually be holding an approximation. If a computer kept numbers in base 10, one third would be stored as 0.3333333333333333. This is the danger.
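To make that finite-precision limit concrete, here is a small Java sketch (the class and variable names are mine, not from the article) that stores 1/3 in a 64-bit double:

```java
public class OneThirdDemo {
    public static void main(String[] args) {
        // 1/3 cannot be stored exactly; the double holds a finite
        // approximation good to about 16 significant decimal digits.
        double oneThird = 1.0 / 3.0;
        System.out.println(oneThird); // prints 0.3333333333333333
    }
}
```

The printed value is the shortest string that uniquely identifies the stored double; the infinite tail of threes is simply gone.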

A value that cannot be expressed exactly in binary.

I showed, using 1/3 as an example, that which fractions have a finite expansion depends on the radix. So what decimal numbers become infinite in binary? For example, the decimal number 0.2 cannot be represented exactly in binary: it becomes the infinite binary fraction 0.001100110011.... The PDF on tmt's mathemaTeX page! explains in detail why it becomes 0.001100110011.... So the decimal number 0.2 cannot be held exactly by a computer (in an ordinary floating-point variable); it is held as an approximation.
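You can actually inspect the approximation that gets stored. In Java, the BigDecimal(double) constructor preserves the exact value of the binary approximation (a small sketch of mine, not from the article):

```java
import java.math.BigDecimal;

public class ExactValueDemo {
    public static void main(String[] args) {
        // BigDecimal(double) captures the exact binary value that the
        // double literal 0.2 is rounded to, with every digit shown.
        System.out.println(new BigDecimal(0.2));
        // prints 0.200000000000000011102230246251565404236316680908203125
    }
}
```

This is exactly why the BigDecimal(double) constructor is usually avoided in real code: it faithfully reproduces the binary approximation rather than the decimal value you meant.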

Calculation errors occur.

This causes problems when dealing with decimals in programming.

const num = 0.2 + 0.1;
console.log("0.2 + 0.1 = " + num);
> 0.2 + 0.1 = 0.30000000000000004

It should be 0.3, but it has grown by 0.00000000000000004. This is the problem. If you use plain floating-point decimals for pay-as-you-go billing calculations, you may produce incorrect charges. That's scary.
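The error also compounds. Here is a small Java sketch (the loop is mine, not from the article) that adds 0.1 ten times, which ought to give exactly 1.0:

```java
public class DriftDemo {
    public static void main(String[] args) {
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.1; // each addition uses the binary approximation of 0.1
        }
        System.out.println(total);        // prints 0.9999999999999999
        System.out.println(total == 1.0); // false
    }
}
```

Ten tiny approximation errors add up to a visible one, which is how a long-running billing total drifts away from the true amount.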

Then what should we do?

Instead of handling decimal fractions with ordinary variables, calculate through a library. For JavaScript, use BigDecimal.js or bignumber.js. In Java, use the BigDecimal class from the standard library. Other languages have libraries with names like "BigDecimal" or "decimal".
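For example, in Java the same 0.2 + 0.1 calculation done through the standard-library BigDecimal class gives the exact answer, as long as you construct the values from strings:

```java
import java.math.BigDecimal;

public class ExactSumDemo {
    public static void main(String[] args) {
        // Construct from String: new BigDecimal(0.1) would capture the
        // binary approximation instead of the decimal value 0.1.
        BigDecimal a = new BigDecimal("0.2");
        BigDecimal b = new BigDecimal("0.1");
        System.out.println(a.add(b)); // prints 0.3
    }
}
```

BigDecimal keeps each value as an integer plus a decimal scale, so no radix conversion happens and no error creeps in.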


If you need to calculate decimals exactly, go through a library. However, when exactness is not required, such as computing heights for drawing, it may be better to use plain variables instead, because a library may sacrifice speed in order to calculate accurately.

That's all.
