Why is 0.1 displayed as 0.1 when printed, even though 0.1 cannot be accurately represented by a floating point number?

Assign 0.1 to x and print it, and the console displays 0.1.

x = 0.1
print(x)
# => 0.1

This seems obvious, but 0.1 cannot be accurately represented by a floating-point number (IEEE 754), so it is actually curious that print displays exactly 0.1. In this article, I'd like to write down what I learned about this.

Environment

This article uses Python 3.7.

[Prerequisite] Floating-point numbers

In this article, the term "floating-point number" refers to IEEE 754 double precision.

The floating-point format represents a number converted to the following form, with the bit fields sign, exp, and frac stored in that order.

(-1)^{sign} \times 2^{exp - 1023} \times (1 + frac \times 2^{-52})

The names and ranges of each symbol are as follows.

symbol | name     | range
sign   | sign     | 0 or 1
exp    | exponent | 1 to 2,046
frac   | fraction | 0 to 4,503,599,627,370,495

See here for how to convert decimal numbers to this format: http://www.picfun.com/mathlib02.html

Here is an easy-to-understand explanation of how to convert a decimal fraction to a binary fraction: https://mathwords.net/syosuu2sin

Decimal fractions often become recurring fractions when written in binary. [^ 1] If, in the process of converting a decimal fraction to binary, the computation is cut off at a finite number of digits, the result differs from the original number (a rounding error).

[^ 1]: For example, of the 999 numbers greater than 0 and less than 1 that can be written with at most 3 decimal places, only 7 do not become recurring fractions when converted to binary.
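The footnote's count can be checked with a short script. A fraction has a terminating binary expansion exactly when its reduced denominator is a power of two:

```python
from fractions import Fraction

# count the fractions d/1000 (0 < d < 1000) whose binary expansion terminates;
# a fraction terminates in binary iff its reduced denominator is a power of two
count = 0
for d in range(1, 1000):
    den = Fraction(d, 1000).denominator
    if den & (den - 1) == 0:
        count += 1
print(count)
# => 7
```

The 7 numbers are the multiples of 0.125 (1/8, 2/8, ..., 7/8), whose reduced denominators divide 8.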

For example, when 0.1 is expressed as a floating point number

Let's convert 0.1 to a floating-point number. The following tool was used for the conversion: https://tools.m-bsys.com/calculators/ieee754.php


The binary floating-point representation of 0.1 is

sign = 0
exp = 01111111011
frac = 1001100110011001100110011001100110011001100110011010

When converted to decimal, this is

sign = 0
exp = 1019
frac = 2702159776422298
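You can also extract these three fields directly in Python with the standard struct module, without an external tool:

```python
import struct

# reinterpret the 8 bytes of the double 0.1 as a 64-bit unsigned integer
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]

sign = bits >> 63              # top bit
exp = (bits >> 52) & 0x7FF     # next 11 bits
frac = bits & ((1 << 52) - 1)  # low 52 bits

print(sign, exp, frac)
# => 0 1019 2702159776422298
```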

Substituting these into the floating-point formula above:

\begin{align}
&(-1)^{0} \times 2^{1019-1023} \times (1 + 2702159776422298 \times 2^{-52})\\
&= 1 \times 2^{-4} \times (2^{52} \times 2^{-52} + 2702159776422298 \times 2^{-52})\\
&=  2^{-4} \times (4503599627370496 \times 2^{-52} + 2702159776422298 \times 2^{-52})\\
&=  2^{-4} \times 7205759403792794 \times 2^{-52}\\
&=  7205759403792794 \times 2^{-56}\\
&=  \frac{7205759403792794}{72057594037927936}\\
&= 0.1000000000000000055511151231257827021181583404541015625
\end{align}

So you can see that, on a computer, 0.1 is actually treated as

0.1000000000000000055511151231257827021181583404541015625
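This result can be verified with Python's fractions and decimal modules, which both recover the exact value stored in the float:

```python
from decimal import Decimal
from fractions import Fraction

# Fraction(0.1) gives the exact rational value of the stored double,
# which equals the fraction derived above
print(Fraction(0.1) == Fraction(7205759403792794, 72057594037927936))
# => True

# Decimal(0.1) shows the same exact value in decimal form
print(Decimal(0.1))
# => 0.1000000000000000055511151231257827021181583404541015625
```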

This is why, for example, adding 0.1 three times does not give 0.3 (more precisely, it is not displayed as 0.3):

print(0.1 + 0.1 + 0.1)
# => 0.30000000000000004

Behavior when printing

So why does print display 0.1 as 0.1 even though 0.1 cannot be accurately represented by a floating-point number? The answer is written on the official Python page:

In older versions of Python, the prompt and the built-in repr() chose the value with 17 significant digits, such as 0.10000000000000001. Starting with Python 3.1, Python (in most situations) chooses the shortest decimal value that still converts to the same float, and simply displays 0.1. https://docs.python.org/ja/3/tutorial/floatingpoint.html

In other words, print chooses the shortest decimal string that converts back to the same floating-point number as 0.1. ~~It seems that 0.1 is displayed as 0.1 in languages other than Python for the same reason.~~ (Correction: for languages other than Python, please check each language's official documentation.)
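This behavior can be observed by comparing repr with an explicit 17-significant-digit format:

```python
x = 0.1

# repr (and print) choose the shortest string that maps back to the same float
print(repr(x))
# => 0.1

# forcing 17 significant digits shows the older-style output
print(format(x, '.17g'))
# => 0.10000000000000001

# both strings convert back to the very same floating-point number
print(float('0.10000000000000001') == 0.1)
# => True
```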


Conversely, a number that is merely very close to 0.1 is also displayed as 0.1.

print(0.1000000000000000056)
# => 0.1

As far as I checked, every number from

0.099999999999999998612221219218554324470460414886474609375

to

0.100000000000000012490009027033011079765856266021728515625

is displayed as 0.1 when printed. This is because every number in this range is converted to the same floating-point number as 0.1.
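You can confirm that both boundary values end up as the same double as 0.1:

```python
# both boundary values are converted to the same double as 0.1,
# so they compare equal to 0.1 and print as 0.1
lo = 0.099999999999999998612221219218554324470460414886474609375
hi = 0.100000000000000012490009027033011079765856266021728515625

print(lo == 0.1, hi == 0.1)
# => True True
print(lo, hi)
# => 0.1 0.1
```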

When no shorter round-trip representation exists, the number is displayed with 17 significant digits.

print(0.12345678901234567890)
#=> 0.12345678901234568

Bonus

In Python, you can specify the number of digits after the decimal point with a format string. This lets you confirm the error between the input value and what is actually stored:

print(f'{0.1:.20f}')
# => 0.10000000000000000555

I wrote a sequel

Earlier in this article, I wrote:

This is why, for example, adding 0.1 three times does not give 0.3 (more precisely, it is not displayed as 0.3):

print(0.1 + 0.1 + 0.1)
# => 0.30000000000000004

But there is something odd here: in the decimal expansion of the float 0.1, the first deviation (a 5) appears at the 18th decimal place, yet when 0.1 is added three times, a deviation (a 4) appears at the 17th decimal place. The arithmetic doesn't seem to add up.
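The mismatch is visible when the exact stored values are printed with Decimal:

```python
from decimal import Decimal

# exact value of the double nearest 0.1: first deviation at the 18th decimal place
print(Decimal(0.1))
# => 0.1000000000000000055511151231257827021181583404541015625

# exact value of the computed sum: the deviation has grown to the 17th decimal place
print(Decimal(0.1 + 0.1 + 0.1))
# => 0.3000000000000000444089209850062616169452667236328125

# for comparison, the double nearest 0.3 is slightly *below* 0.3
print(Decimal(0.3))
# => 0.299999999999999988897769753748434595763683319091796875
```

The sum 0.1 + 0.1 + 0.1 accumulates rounding error from each addition, so it lands on a different double than the one nearest 0.3; the articles below explain this in detail.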

I explained this phenomenon in the following articles:

- The floating point number of 0.1 is larger than 0.1, but why is it smaller than 1.0 when added 10 times? [Part 1]
- The floating point number of 0.1 is larger than 0.1, but why is it smaller than 1.0 when added 10 times? [Part 2]
